Create a RESTful API with authentication using Web API and Jwt

Web API is a feature of the ASP.NET framework that dramatically simplifies building RESTful (REST-like) HTTP services that are cross-platform and device and browser agnostic. With Web API, you can create endpoints that can be accessed using a combination of descriptive URLs and HTTP verbs. Those endpoints can serve data back to the caller as either JSON or XML that is standards compliant. With JSON Web Tokens (JWT), which are typically stateless, you can add an authentication and authorization layer, enabling you to restrict access to some or all of your API.

The purpose of this tutorial is to develop the beginnings of a Book Store API, using Microsoft Web API with C#, which authenticates and authorizes each request, exposes OAuth2 endpoints, and returns data about books and reviews for consumption by the caller. The caller in this case will be Postman, a useful utility for querying APIs.

In a follow up to this post we will write a front end to interact with the API directly.

Set up

Open Visual Studio (I will be using Visual Studio 2015 Community edition, you can use whatever version you like) and create a new Empty project, ensuring you select the Web API option;

Where you save the project is up to you, but I will create my projects under *C:\Source*. For simplicity you might want to do the same.

New Project

Next, packages.

Packages

Open up the packages.config file. Some packages should have already been added to enable Web API itself. Please add the following additional packages;

install-package EntityFramework
install-package Microsoft.AspNet.Cors
install-package Microsoft.AspNet.Identity.Core
install-package Microsoft.AspNet.Identity.EntityFramework
install-package Microsoft.AspNet.Identity.Owin
install-package Microsoft.AspNet.WebApi.Cors
install-package Microsoft.AspNet.WebApi.Owin
install-package Microsoft.Owin.Cors
install-package Microsoft.Owin.Security.Jwt
install-package Microsoft.Owin.Host.SystemWeb
install-package System.IdentityModel.Tokens.Jwt
install-package Thinktecture.IdentityModel.Core

These are the minimum packages required to provide data persistence, enable CORS (Cross-Origin Resource Sharing), and enable generating and authenticating/authorizing JWTs.

Entity Framework

We will use Entity Framework for data persistence, using the Code-First approach. Entity Framework will take care of generating a database, adding tables, stored procedures and so on. As an added benefit, Entity Framework will also upgrade the schema automatically as we make changes. Entity Framework is perfect for rapid prototyping, which is what we are in essence doing here.

Create a new IdentityDbContext called BooksContext, which will give us Users, Roles and Claims in our database. I like to add this under a folder called Core, for organization. We will add our entities to this later.

namespace BooksAPI.Core
{
    using Microsoft.AspNet.Identity.EntityFramework;

    public class BooksContext : IdentityDbContext
    {

    }
}

Claims are used to describe useful information that the user has associated with them. We will use claims to tell the client which roles the user has. The benefit of roles is that we can prevent access to certain methods/controllers to a specific group of users, and permit access to others.

Add a DbMigrationsConfiguration class and allow automatic migrations, but prevent automatic data loss;

namespace BooksAPI.Core
{
    using System.Data.Entity.Migrations;

    public class Configuration : DbMigrationsConfiguration<BooksContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
            AutomaticMigrationDataLossAllowed = false;
        }
    }
}

Whilst losing data at this stage is not important (we will use a seed method later to populate our database), I like to turn this off now so I do not forget later.

Now tell Entity Framework how to update the database schema using an initializer, as follows;

namespace BooksAPI.Core
{
    using System.Data.Entity;

    public class Initializer : MigrateDatabaseToLatestVersion<BooksContext, Configuration>
    {
    }
}

This tells Entity Framework to go ahead and upgrade the database to the latest version automatically for us.

Finally, tell your application about the initializer by updating the Global.asax.cs file as follows;

namespace BooksAPI
{
    using System.Data.Entity;
    using System.Web;
    using System.Web.Http;
    using Core;

    public class WebApiApplication : HttpApplication
    {
        protected void Application_Start()
        {
            GlobalConfiguration.Configure(WebApiConfig.Register);
            Database.SetInitializer(new Initializer());
        }
    }
}

Data Provider

By default, Entity Framework will configure itself to use LocalDB. If this is not desirable, say you want to use SQL Express instead, you need to make the following adjustments;

Open the Web.config file and delete the following code;

<entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
        <parameters>
            <parameter value="mssqllocaldb" />
        </parameters>
    </defaultConnectionFactory>
    <providers>
        <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
    </providers>
</entityFramework>

And add the connection string;

<connectionStrings>
    <add name="BooksContext" providerName="System.Data.SqlClient" connectionString="Server=.;Database=Books;Trusted_Connection=True;" />
</connectionStrings>

Now we’re using SQL Server directly (whatever flavour that might be) rather than LocalDB.

JSON

Whilst we’re here, we might as well configure our application to return camel-case JSON (thisIsCamelCase), instead of the default pascal-case (ThisIsPascalCase).

Add the following code to your Application_Start method;

// Requires: using Newtonsoft.Json; and using Newtonsoft.Json.Serialization;
var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Formatting = Formatting.Indented;
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();

There is nothing worse than pascal-case JavaScript.

CORS (Cross-Origin Resource Sharing)

Cross-Origin Resource Sharing, or CORS for short, is when a client requests access to a resource (an image, or say, data from an endpoint) from an origin (domain) that is different from the domain where the resource itself originates.

This step is completely optional. We are adding in CORS support here because when we come to write our client app in subsequent posts that follow on from this one, we will likely use a separate HTTP server (for testing and debugging purposes). When released to production, these two apps would use the same host (Internet Information Services (IIS)).

To enable CORS, open WebApiConfig.cs and add the following code to the beginning of the Register method;

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
config.MessageHandlers.Add(new PreflightRequestsHandler());

And add the following class (in the same file if you prefer for quick reference);

public class PreflightRequestsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Contains("Origin") && request.Method.Method == "OPTIONS")
        {
            var response = new HttpResponseMessage {StatusCode = HttpStatusCode.OK};
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            response.Headers.Add("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization");
            response.Headers.Add("Access-Control-Allow-Methods", "*");
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

In the CORS workflow, before sending a DELETE, PUT or POST request, the client sends an OPTIONS request (known as a preflight) to check whether the server permits requests from the client's origin. If the request origin and server origin are not the same, then the server must include various access headers that describe which origins have access. To enable access for all origins, we just respond with an origin header (Access-Control-Allow-Origin) set to an asterisk.

The Access-Control-Allow-Headers header describes which headers the API can accept/is expecting to receive. The Access-Control-Allow-Methods header describes which HTTP verbs are supported/permitted.
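For illustration, a typical preflight exchange looks something like this (the origin, port and path here are hypothetical);

```
OPTIONS /api/reviews HTTP/1.1
Host: localhost:62996
Origin: http://localhost:8080
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type, Authorization

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Origin, Content-Type, Accept, Authorization
Access-Control-Allow-Methods: *
```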

See Mozilla Developer Network (MDN) for a more comprehensive write-up on Cross-Origin Resource Sharing (CORS).

Data Model

With Entity Framework configured, let's create our data structure. The API will expose books, and books will have reviews.

Under the Models folder add a new class called Book. Add the following code;

namespace BooksAPI.Models
{
    using System.Collections.Generic;

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string ImageUrl { get; set; }

        public virtual List<Review> Reviews { get; set; }
    }
}

And add Review, as shown;

namespace BooksAPI.Models
{
    public class Review
    {
        public int Id { get; set; }    
        public string Description { get; set; }    
        public int Rating { get; set; }
        public int BookId { get; set; }
    }
}

Add these entities to the IdentityDbContext we created earlier;

public class BooksContext : IdentityDbContext
{
    public DbSet<Book> Books { get; set; }
    public DbSet<Review> Reviews { get; set; }
}

Be sure to add in the necessary using directives.

A couple of helpful abstractions

We need to abstract a couple of the classes we will be making use of, both to keep our code clean and to ensure that it works correctly.

Under the Core folder, add the following classes;

public class BookUserManager : UserManager<IdentityUser>
{
    public BookUserManager() : base(new BookUserStore())
    {
    }
}

We will make heavy use of the UserManager<T> in our project, and we don’t want to have to initialise it with a UserStore<T> every time we want to make use of it. Whilst adding this is not strictly necessary, it does go a long way to helping keep the code clean.

Now add another class for the UserStore, as shown;

public class BookUserStore : UserStore<IdentityUser>
{
    public BookUserStore() : base(new BooksContext())
    {
    }
}

This code is really important. If we fail to tell the UserStore which DbContext to use, it falls back to a default context of its own;

A network-related or instance-specific error occurred while establishing a connection to SQL Server

The default store creates its own IdentityDbContext, which looks for a connection string named "DefaultConnection" and does not correspond to our application's DbContext. This code will help prevent you from tearing your hair out later wondering why you are getting the super-helpful error message shown above.

API Controller

We need to expose some data to our client (when we write it). Let's take advantage of Entity Framework's Seed method. The Seed method will pre-populate some books and reviews automatically for us.

Instead of dropping the code in directly for this class (it is very long), please refer to the Configuration.cs file on GitHub.

This code gives us a little bit of starting data to play with, instead of having to add a bunch of data manually each time we make changes to our schema that require the database to be re-initialized (not really in our case as we have an extremely simple data model, but in larger applications this is very useful).

Books Endpoint

Next, we want to create the RESTful endpoint that will retrieve all the books data. Create a new Web API controller called BooksController and add the following;

public class BooksController : ApiController
{
    [HttpGet]
    public async Task<IHttpActionResult> Get()
    {
        using (var context = new BooksContext())
        {
            return Ok(await context.Books.Include(x => x.Reviews).ToListAsync());
        }
    }
}

With this code we are fully exploiting recent additions to the .NET Framework: async and await. Writing asynchronous code in this manner allows the thread to be released whilst data (books and reviews) is being retrieved from the database and converted to objects for our code to consume. When the asynchronous operation is complete, the code picks up where it left off and continues executing. (By which we mean the hydrated data objects are passed to the underlying framework, converted to JSON/XML, and returned to the client.)

Reviews Endpoint

We’re also going to enable authorized users to post reviews and delete reviews. For this we will need a ReviewsController with the relevant Post and Delete methods.

Create a new Web API controller called ReviewsController and add the following code;

public class ReviewsController : ApiController
{
    [HttpPost]
    public async Task<IHttpActionResult> Post([FromBody] ReviewViewModel review)
    {
        using (var context = new BooksContext())
        {
            var book = await context.Books.FirstOrDefaultAsync(b => b.Id == review.BookId);
            if (book == null)
            {
                return NotFound();
            }

            var newReview = context.Reviews.Add(new Review
            {
                BookId = book.Id,
                Description = review.Description,
                Rating = review.Rating
            });

            await context.SaveChangesAsync();
            return Ok(new ReviewViewModel(newReview));
        }
    }

    [HttpDelete]
    public async Task<IHttpActionResult> Delete(int id)
    {
        using (var context = new BooksContext())
        {
            var review = await context.Reviews.FirstOrDefaultAsync(r => r.Id == id);
            if (review == null)
            {
                return NotFound();
            }

            context.Reviews.Remove(review);
            await context.SaveChangesAsync();
        }
        return Ok();
    }
}

There are a couple of good practices in play here that we need to highlight.

The first method, Post, allows the user to add a new review. Notice the parameter for the method;

[FromBody] ReviewViewModel review

The [FromBody] attribute tells Web API to look for the data for the method argument in the body of the HTTP message that we received from the client, and not in the URL. The second parameter is a view model that wraps around the Review entity itself. Add a new folder to your project called ViewModels, add a new class called ReviewViewModel and add the following code;

public class ReviewViewModel
{
    public ReviewViewModel()
    {
    }

    public ReviewViewModel(Review review)
    {
        if (review == null)
        {
            return;
        }

        BookId = review.BookId;
        Rating = review.Rating;
        Description = review.Description;
    }

    public int BookId { get; set; }
    public int Rating { get; set; }
    public string Description { get; set; }

    public Review ToReview()
    {
        return new Review
        {
            BookId = BookId,
            Description = Description,
            Rating = Rating
        };
    }
}

We are just copying all the properties from the Review entity to the ReviewViewModel and vice-versa. So why bother? First, to help mitigate a well-known under/over-posting vulnerability (good write up about it here) inherent in most web services. Second, it helps prevent unwanted information being sent to the client. With this approach we have to explicitly expose data to the client by adding properties to the view model.

For this scenario, this approach is probably a bit overkill, but I highly recommend it; keeping your application secure is important, as is preventing leaks of potentially sensitive information. A tool I've used in the past to simplify this mapping code is AutoMapper. I highly recommend checking it out.

Important note: In order to keep our API RESTful, we return the newly created entity (or its view model representation) back to the client for consumption, removing the need to re-fetch the entire data set.

The Delete method is trivial. We accept the Id of the review we want to delete as a parameter, then fetch the entity and finally remove it from the collection. Calling SaveChangesAsync will make the change permanent.

Meaningful response codes

We want to return useful information back to the client as much as possible. Notice that the Post method returns NotFound(), which translates to a 404 HTTP status code, if the corresponding Book for the given review cannot be found. This is useful for client side error handling. Returning Ok() will return 200 (HTTP ‘Ok’ status code), which informs the client that the operation was successful.
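As an optional extension of the same idea (this is a sketch, not part of the code above), the Post method could also return 400 (Bad Request) when the client sends an invalid or missing body;

```csharp
[HttpPost]
public async Task<IHttpActionResult> Post([FromBody] ReviewViewModel review)
{
    // Return 400 (Bad Request) when the body is missing or fails model validation.
    if (review == null || !ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    // ... continue as before ...
}
```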

Authentication and Authorization Using OAuth and JSON Web Tokens (JWT)

My preferred approach for dealing with authentication and authorization is to use JSON Web Tokens (JWT). We will open up an OAuth endpoint that accepts user credentials and returns a token describing the user's claims. For each of the user's roles we will add a claim (which could be used to control which views the user has access to on the client side).

We use OWIN to add our OAuth configuration into the pipeline. Add a new class to the project called Startup.cs and add the following code;

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof (BooksAPI.Startup))]

namespace BooksAPI
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureOAuth(app);
        }
    }
}

Notice that Startup is a partial class. I've done that because I want to keep this class as simple as possible; as the application becomes more complicated and we add more and more middleware, this class will grow considerably. You could use a static helper class here, but the preferred approach in the MSDN documentation seems to lean towards partial classes specifically.

Under the App_Start folder add a new class called Startup.OAuth.cs and add the following code;

using System;
using System.Configuration;
using BooksAPI.Core;
using BooksAPI.Identity;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.DataHandler.Encoder;
using Microsoft.Owin.Security.Jwt;
using Microsoft.Owin.Security.OAuth;
using Owin;

namespace BooksAPI
{
    public partial class Startup
    {
        public void ConfigureOAuth(IAppBuilder app)
        {            
        }
    }
}

Note. When I wrote this code originally I encountered a quirk. After spending hours pulling out my hair trying to figure out why something was not working, I eventually discovered that the ordering of the code in this class is very important. If you don’t copy the code in the exact same order, you may encounter unexpected behaviour. Please add the code in the same order as described below.

OAuth secrets

First, add the following code;

var issuer = ConfigurationManager.AppSettings["issuer"];
var secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);

  • Issuer – a unique identifier for the entity that issued the token (not to be confused with Entity Framework’s entities)
  • Secret – a secret key used to secure the token and prevent tampering

I keep these values in the Web configuration file (Web.config). To be precise, I split these values out into their own configuration file called keys.config and add a reference to that file in the main Web.config. I do this so that I can exclude just the keys from source control by adding a line to my .gitignore file.

To do this, open Web.config and change the <appSettings> section as follows;

<appSettings file="keys.config">
</appSettings>

Now add a new file to your project called keys.config and add the following code;

<appSettings>
  <add key="issuer" value="http://localhost/"/>
  <add key="secret" value="IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw"/>
</appSettings>

Adding objects to the OWIN context

We can make use of OWIN to manage instances of objects for us, on a per request basis. The pattern is comparable to IoC, in that you tell the “container” how to create an instance of a specific type of object, then request the instance using a Get<T> method.

Add the following code;

app.CreatePerOwinContext(() => new BooksContext());
app.CreatePerOwinContext(() => new BookUserManager());

The first time we request an instance of BooksContext, for example, the lambda expression will execute and a new BooksContext will be created and returned to us. Subsequent requests within the same HTTP request will return that same instance.

Important note: The life-cycle of object instance is per-request. As soon as the request is complete, the instance is cleaned up.
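Later, when we need these objects (say, inside a controller action), we can retrieve the per-request instances. A quick sketch, assuming the Microsoft.Owin.Host.SystemWeb and Microsoft.AspNet.Identity.Owin extension methods are referenced;

```csharp
// Requires: using System.Web; and using Microsoft.AspNet.Identity.Owin;
// Retrieve the per-request instances registered with CreatePerOwinContext.
// Calling Get<T> again within the same HTTP request returns the same instance.
var context = HttpContext.Current.GetOwinContext().Get<BooksContext>();
var userManager = HttpContext.Current.GetOwinContext().GetUserManager<BookUserManager>();
```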

Enabling Bearer Authentication/Authorization

To enable bearer authentication, add the following code;

app.UseJwtBearerAuthentication(new JwtBearerAuthenticationOptions
{
    AuthenticationMode = AuthenticationMode.Active,
    AllowedAudiences = new[] { "Any" },
    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
    {
        new SymmetricKeyIssuerSecurityTokenProvider(issuer, secret)
    }
});

The key takeaway of this code;

  • State who is the audience (we’re specifying “Any” for the audience, as this is a required field but we’re not fully implementing it).
  • State who is responsible for generating the tokens. Here we’re using SymmetricKeyIssuerSecurityTokenProvider and passing it our secret key to prevent tampering. We could use the X509CertificateSecurityTokenProvider, which uses a X509 certificate to secure the token (but I’ve found these to be overly complex in the past and I prefer a simpler implementation).

This code adds JWT bearer authentication to the OWIN pipeline.

Enabling OAuth

We need to expose an OAuth endpoint so that the client can request a token (by passing a user name and password).

Add the following code;

app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
{
    AllowInsecureHttp = true,
    TokenEndpointPath = new PathString("/oauth2/token"),
    AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
    Provider = new CustomOAuthProvider(),
    AccessTokenFormat = new CustomJwtFormat(issuer)
});

Some important notes with this code;

  • We’re going to allow insecure HTTP requests whilst we are in development mode. You might want to disable this using a #if DEBUG directive so that you don’t allow insecure connections in production.
  • Open an endpoint under /oauth2/token that accepts post requests.
  • When generating a token, make it expire after 30 minutes (1800 seconds).
  • We will use our own provider, CustomOAuthProvider, and formatter, CustomJwtFormat, to take care of authentication and building the actual token itself.

We need to write the provider and formatter next.

Formatting the JWT

Create a new class under the Identity folder called CustomJwtFormat.cs. Add the following code;

namespace BooksAPI.Identity
{
    using System;
    using System.Configuration;
    using System.IdentityModel.Tokens;
    using Microsoft.Owin.Security;
    using Microsoft.Owin.Security.DataHandler.Encoder;
    using Thinktecture.IdentityModel.Tokens;

    public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
        private static readonly byte[] _secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);
        private readonly string _issuer;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException(nameof(data));
            }

            var signingKey = new HmacSigningCredentials(_secret);
            var issued = data.Properties.IssuedUtc;
            var expires = data.Properties.ExpiresUtc;

            return new JwtSecurityTokenHandler().WriteToken(new JwtSecurityToken(_issuer, null, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey));
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }
}

This is a complicated-looking class, but it's pretty straightforward. We are just fetching all the information needed to generate the token (the claims, issued date, expiration date and signing key), then generating the token and returning it to the caller.

Please note: Some of the code we are writing today was influenced by JSON Web Token in ASP.NET Web API 2 using OWIN by Taiseer Joudeh. I highly recommend checking it out.

The authentication bit

We’re almost there, honest! Now we want to authenticate the user. Create a new class under the Identity folder called CustomOAuthProvider.cs and add the following code;

using System.Linq;
using System.Security.Claims;
using System.Security.Principal;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using BooksAPI.Core;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.AspNet.Identity.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OAuth;

namespace BooksAPI.Identity
{
    public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {
        public override Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {
            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] {"*"});

            var user = context.OwinContext.Get<BooksContext>().Users.FirstOrDefault(u => u.UserName == context.UserName);
            if (user == null || !context.OwinContext.Get<BookUserManager>().CheckPassword(user, context.Password))
            {
                context.SetError("invalid_grant", "The user name or password is incorrect");
                context.Rejected();
                return Task.FromResult<object>(null);
            }

            var ticket = new AuthenticationTicket(SetClaimsIdentity(context, user), new AuthenticationProperties());
            context.Validated(ticket);

            return Task.FromResult<object>(null);
        }

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        private static ClaimsIdentity SetClaimsIdentity(OAuthGrantResourceOwnerCredentialsContext context, IdentityUser user)
        {
            var identity = new ClaimsIdentity("JWT");
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim("sub", context.UserName));

            var userRoles = context.OwinContext.Get<BookUserManager>().GetRoles(user.Id);
            foreach (var role in userRoles)
            {
                identity.AddClaim(new Claim(ClaimTypes.Role, role));
            }

            return identity;
        }
    }
}

As we’re not checking the audience, when ValidateClientAuthentication is called we can just validate the request. When the request has a grant_type of password, which all our requests to the OAuth endpoint will have, the above GrantResourceOwnerCredentials method is executed. This method authenticates the user and creates the claims to be added to the JWT.

Testing

There are two tools you can use for testing this.

Technique 1 – Using the browser

Open up a web browser, and navigate to the books URL.

Testing with the web browser

You will see the list of books, displayed as XML. This is because Web API can serve up data either as XML or as JSON. Personally, I do not like XML, JSON is my choice these days.

Technique 2 (Preferred) – Using Postman

To make Web API respond in JSON we need to send along an Accept header. The best tool to enable us to do this (for Google Chrome) is Postman. Download it and give it a go if you like.

Drop the same URL into the Enter request URL field, and click Send. Notice the response is in JSON;

Postman response in JSON

This worked because Postman automatically adds the Accept header to each request. You can see this by clicking on the Headers tab. If the header isn’t there and you’re still getting XML back, just add the header as shown in the screenshot and re-send the request.

To test the delete method, change the HTTP verb to Delete and add the ReviewId to the end of the URL. For example; http://localhost:62996/api/reviews/9

Putting it all together

First, we need to restrict access to our endpoints.

Add a new file to the App_Start folder, called FilterConfig.cs and add the following code;

public class FilterConfig
{
    public static void Configure(HttpConfiguration config)
    {
        // Be sure to use System.Web.Http.AuthorizeAttribute here, not the MVC version in System.Web.Mvc.
        config.Filters.Add(new AuthorizeAttribute());
    }
}

And call the code from Global.asax.cs as follows;

GlobalConfiguration.Configure(FilterConfig.Configure);

Adding this code will restrict access to all endpoints (except the OAuth endpoint) to requests that have been authenticated (a request that sends along a valid Jwt).

You have much more fine-grain control here, if required. Instead of adding the above code, you could instead add the AuthorizeAttribute to specific controllers or even specific methods. The added benefit here is that you can also restrict access to specific users or specific roles;

Example code;

[Authorize(Roles = "Admin")]

The roles value (“Admin”) can be a comma-separated list. For us, restricting access to all endpoints will suffice.
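For example, to restrict the delete action to specific roles (the role names here are purely illustrative);

```csharp
// Only users in the Admin or Moderator role may delete reviews.
[Authorize(Roles = "Admin,Moderator")]
[HttpDelete]
public async Task<IHttpActionResult> Delete(int id)
{
    // ... same implementation as before ...
}
```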

To test that this code is working correctly, simply make a GET request to the books endpoint;

GET http://localhost:62996/api/books

You should get the following response;

{
  "message": "Authorization has been denied for this request."
}

Great, it's working. Now let's fix that problem.

Make a POST request to the OAuth endpoint, and include the following;

  • Headers
    • Accept application/json
    • Accept-Language en-gb
    • Audience Any
  • Body
    • username administrator
    • password administrator123
    • grant_type password

Shown in the below screenshot;

OAuth Request

Make sure you set the message type as x-www-form-urlencoded.

If you are interested, here is the raw message;

POST /oauth2/token HTTP/1.1
Host: localhost:62996
Accept: application/json
Accept-Language: en-gb
Audience: Any
Content-Type: application/x-www-form-urlencoded
Cache-Control: no-cache
Postman-Token: 8bc258b2-a08a-32ea-3cb2-2e7da46ddc09

username=administrator&password=administrator123&grant_type=password

The form data has been URL encoded and placed in the message body.

The web service should authenticate the request, and return a token (Shown in the response section in Postman). You can test that the authentication is working correctly by supplying an invalid username/password. In this case, you should get the following reply;

{
  "error": "invalid_grant"
}

This is deliberately vague to avoid giving any malicious users more information than they need.

Now to get a list of books, we need to call the endpoint passing in the token as a header.

Change the HTTP verb to GET and change the URL to; http://localhost:62996/api/books.

On the Headers tab in Postman, add the following additional headers;

Authorization Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6ImFkbWluaXN0cmF0b3IiLCJzdWIiOiJhZG1pbmlzdHJhdG9yIiwicm9sZSI6IkFkbWluaXN0cmF0b3IiLCJpc3MiOiJodHRwOi8vand0YXV0aHpzcnYuYXp1cmV3ZWJzaXRlcy5uZXQiLCJhdWQiOiJBbnkiLCJleHAiOjE0NTgwNDI4MjgsIm5iZiI6MTQ1ODA0MTAyOH0.uhrqQW6Ik_us1lvDXWJNKtsyxYlwKkUrCGXs-eQRWZQ

See screenshot below;

Authorization Header

Success! We have data from our secure endpoint.
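For reference, the equivalent request from C# simply attaches the token via the standard Authorization header; a sketch, assuming you have already captured the token string from the previous call:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class BooksClient
{
    public static async Task<string> GetBooksAsync(string token)
    {
        using (HttpClient client = new HttpClient())
        {
            // "Bearer" is the scheme; the token is the value returned
            // by the /oauth2/token endpoint.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);

            return await client.GetStringAsync("http://localhost:62996/api/books");
        }
    }
}
```
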

Summary

In this introduction we looked at creating a project using Web API to issue and authenticate Jwt (JSON Web Tokens). We created a simple endpoint to retrieve a list of books, and also added the ability to get a specific book/review and delete reviews in a RESTful way.

This project is the foundation for subsequent posts that will explore creating a rich client side application, using modern JavaScript frameworks, which will enable authentication and authorization.

Entity Framework – Use a Guid as the primary key

Using a Guid as your table’s primary key in Entity Framework requires a little more effort than using an integer.  The setup process is straightforward once you’ve seen how to do it.

The process is slightly different for the Code First and Database First approaches.  This post discusses both techniques.

Code First

Using a Guid as the primary key when taking the code first approach is simple.  When creating your entity, add the DatabaseGenerated attribute to your primary key property, as shown below;

[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public Guid Id { get; set; }

Entity Framework will create the column as you would expect, with a primary key and a uniqueidentifier data type.

codefirst-defaultvalue

Click to zoom in

Also notice, importantly, that the default value on the column has been set to (newsequentialid()).  This generates a new sequential (continuous) Guid for each row.  If you were so inclined, you could change this to (newid()), which would result in a completely random Guid for each new row.  Be aware that this default will be lost each time your database gets dropped and re-created, so it works better when taking the Database First approach.

Database First

The database first approach follows a similar line to the code first approach, but you’ll have to manually edit your model to make it work.

Ensure that you edit the primary key column and add the (newsequentialid()) or (newid()) function as the default value before doing anything.

Next, open your EDMX diagram, select the appropriate property and open the Properties window.  Ensure that StoreGeneratedPattern is set to Identity.

databasefirst-model

Click the image to zoom in

There is no need to give your entity an Id in your code; it will be populated for you automatically after the entity has been committed to the database;

using (ApplicationDbContext context = new ApplicationDbContext())
{
    var person = new Person
                     {
                         FirstName = "Random",
                         LastName = "Person"
                     };

    context.People.Add(person);
    context.SaveChanges();
    Console.WriteLine(person.Id);
}

Important Note: Your Guid field MUST be a primary key, or this does not work.  Entity Framework will give you a rather cryptic error message!

Summary

Guids (Globally Unique Identifiers) can easily be used as primary keys in Entity Framework.  A little extra effort is required to do this, depending on which approach you are taking.  When using the code first approach, add the DatabaseGenerated attribute to your key field.  When taking the Database First approach, explicitly set the StoreGeneratedPattern to Identity on your model.

Quick look: Add indexes to tables with Entity Framework 6.1 Beta 1

At the time of writing, beta 1 of Entity Framework 6.1 has recently been released.  This is mostly a maintenance release; however, there are several new features of note, including the ability to add indexes using a new data annotation when developing using the code first approach.

Add indexes using the Index attribute

To add an index, simply add the Index attribute as required;

public class Customer
{
    [Index]
    public int Id { get; set; }
}

A quick look in SQL Server Management Studio reveals that the index has been added for us.

image

The index created was non-unique and non-clustered.  There are properties available to override these defaults;

[Index(IsClustered = false, IsUnique = true)]
public int Id { get; set; }

Which results in a unique index as shown below;

indexes
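The attribute also accepts a name and a column order, which lets you build a composite index spanning several properties; a sketch, where the entity and property names are made up for illustration:

```csharp
public class Customer
{
    public int Id { get; set; }

    // Both properties share the index name, so Entity Framework creates
    // a single two-column index; Order controls the column position
    // within that index.
    [Index("IX_RegionCustomerNumber", Order = 1)]
    public int RegionId { get; set; }

    [Index("IX_RegionCustomerNumber", Order = 2)]
    public int CustomerNumber { get; set; }
}
```
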

Summary

This is a fantastic new feature for potentially improving the performance of your code first applications by reducing the time required to query data.  You can add indexes to your database using the Index attribute, and override the default Unique and Clustered settings to get the index best suited to your scenario.

Resilient Connection for Entity Framework 6

Disclaimer: I cannot take a shred of credit for the code you are about to see.  I simply went over to the Entity Framework source code repository on CodePlex, copied some code, butchered it, hammered it, and rolled it until it fitted nicely into the slot that was my requirements.  Please direct your thanks to the Entity Framework team.

I had a requirement whereby I needed a resilient Entity Framework connection, and by that I mean a connection that could retry a certain number of times automatically before giving up.  After searching around I found a class, called DbExecutionStrategy, which provided exactly the functionality I needed.

The problem is, as far as I can tell, this class can only be used with the Code First approach.  So it is all but useless to me, as the project I was working on was based on the database first approach.

I directly copied the code provided in the aforementioned class, removed all the bits that didn’t interest me, and then reworked the code into a simple extension method that can be used against any DbContext;

public static class Extensions
{
    #region Constants

    private const double DefaultExponentialBase = 2;
    private const double DefaultRandomFactor = 1.1;
    private const int MaxRetryCount = 5;

    #endregion

    #region Static Fields

    private static readonly TimeSpan DefaultCoefficient = TimeSpan.FromSeconds(1);
    private static readonly TimeSpan MaxDelay = TimeSpan.FromSeconds(30);

    private static readonly Random _random = new Random();
    private static int _attempts;

    #endregion

    public static void ExecuteWithRetry<T>(this T context, Action<T> operation)
        where T : DbContext
    {
        // Reset the shared attempt counter so each operation gets a
        // fresh set of retries.
        _attempts = 0;

        while (true)
        {
            TimeSpan? delay;
            try
            {
                operation(context);
                return;
            }
            catch (Exception ex)
            {
                delay = GetNextDelay();
                if (delay == null)
                {
                    throw new RetryLimitExceededException("The maximum number of retries has been reached", ex);
                }
            }

            Thread.Sleep(delay.Value);
        }
    }

    public static TResult ExecuteWithRetry<T, TResult>(this DbContext context, Func<T, TResult> operation)
        where T : DbContext
    {
        _attempts = 0;

        while (true)
        {
            TimeSpan? delay;
            try
            {
                TResult result = operation((T)context);
                return result;
            }
            catch (Exception ex)
            {
                delay = GetNextDelay();
                if (delay == null)
                {
                    throw new RetryLimitExceededException("The maximum number of retries has been reached", ex);
                }
            }

            Thread.Sleep(delay.Value);
        }
    }

    private static TimeSpan? GetNextDelay()
    {
        _attempts += 1;
        if (_attempts < MaxRetryCount)
        {
            // Exponential back-off with a small random factor, capped at MaxDelay.
            double delta = (Math.Pow(DefaultExponentialBase, _attempts) - 1.0) * (1.0 + _random.NextDouble() * (DefaultRandomFactor - 1.0));
            double delay = Math.Min(DefaultCoefficient.TotalMilliseconds * delta, MaxDelay.TotalMilliseconds);

            return TimeSpan.FromMilliseconds(delay);
        }

        return null;
    }
}

So if you want resilient connections in your project, all you need is the following;

using (AdventureWorks2012Entities context = new AdventureWorks2012Entities())
{
    var addresses = context.ExecuteWithRetry<AdventureWorks2012Entities, List<Address>>(c => c.Addresses.ToList());
    Console.WriteLine(addresses.Count());
}

The logic is simple to test; just use SQL Server Management Studio to take your database offline. The code will retry up to 5 times, and each subsequent retry will wait slightly longer before making the next attempt (capped at 30 seconds).

I will refine the code over time, but for now it more than serves its purpose. Let me know if you find this helpful!
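The same extension works for write operations too; for example, wrapping SaveChanges so a transient failure during a save is retried. A sketch against the same AdventureWorks context (the entity and property touched here are just for illustration):

```csharp
using (AdventureWorks2012Entities context = new AdventureWorks2012Entities())
{
    // Pick any tracked entity and modify it.
    Address address = context.Addresses.First();
    address.ModifiedDate = DateTime.UtcNow;

    // The Action<T> overload retries the whole delegate, so SaveChanges
    // is re-executed on each attempt until it succeeds or the retry
    // limit (5 attempts) is reached.
    context.ExecuteWithRetry(c => c.SaveChanges());
}
```
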

In the spotlight: Demystifying IQueryable (Entity Framework 6)

I’m not afraid to admit it. I can’t be the only one who has been confused by IQueryable when working with Entity Framework or LINQ to SQL. When using the web to research the topic, I have seen many conflicting blog posts and contradicting answers on StackOverflow. I have read various posts on MSDN about the subject, and I find them difficult to understand and incomplete. I seek to at least attempt to clarify the subject a little with this post.

Entity Framework has become such a powerful object relational mapper (ORM) that the performance of desktop and web applications seems to be drifting towards the back of developers’ minds. Personally, I’m so used to doing things the “traditional way” (using ADO.NET SqlConnection, DbCommand etc. and testing/optimizing using a combination of SQL Server Management Studio and SQL Profiler) that I find it hard to “write code and forget”. So naturally, I keep a close eye on what queries are being run against my database.

The purpose of this blog post is to have a closer look at IQueryable, see how it is implemented in Entity Framework, what happens to our code at compile time, and understand what impact IQueryable has on our repositories.

All code samples in this post use Entity Framework 6 (although all versions behave the same) and use the Adventure Works 2012 database. Just for reference, I am also using Visual Studio 2013 and SQL Server 2012 but all should hold true with any past version. The code has not been tested with MySQL or any other database provider. Code can be downloaded at the end of this post.

Deferred execution

Before we do anything, let’s get one thing straight so that we are absolutely clear on this. BOTH IQueryable and IEnumerable expressions are deferred (lazily executed). This means that the expression itself is created immediately, but it isn’t executed until needed (invoked or evaluated).

Example:

using (AdventureWorks2012Entities context = new AdventureWorks2012Entities())
{
    var result = context.SalesPersons.OrderByDescending(p => p.Bonus).Take(10);
    foreach (var item in result)
    {
        Console.WriteLine(item.Bonus);
    }
}

Before running the above code, set a breakpoint on the foreach line and start running SQL Profiler. Step through the program and you will see that result doesn’t become populated until the first iteration of the loop. If result never gets used/invoked, then the query will never be executed. Virtual navigation properties are also lazily loaded, and lazy loading can also be turned off. Both of these topics are out of the scope of this post.
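To force immediate execution instead, materialise the expression with ToList() (or ToArray()); the query runs at that point, and the foreach then iterates a plain in-memory list:

```csharp
using (AdventureWorks2012Entities context = new AdventureWorks2012Entities())
{
    // ToList() invokes the query immediately; the SQL executes here,
    // not at the foreach below.
    List<SalesPerson> result = context.SalesPersons
        .OrderByDescending(p => p.Bonus)
        .Take(10)
        .ToList();

    foreach (var item in result)
    {
        Console.WriteLine(item.Bonus);
    }
}
```
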

LINQ to Objects

First things first… so that we prevent any confusion, this post takes aim at Entity Framework and LINQ to SQL, but not LINQ to Objects. Why? Well, simply because (as far as I can tell) all LINQ to Objects queries return IEnumerable or a derivative of IEnumerable, such as IOrderedEnumerable or IEnumerable<IGrouping<TKey,TElement>> to name just a couple. IEnumerable expressions are executed in memory against the full dataset.

Example;

string[] colours = new[] { "Red", "Orange", "Yellow", "Green", "Blue", "Indigo", "Violet" };
IOrderedEnumerable<string> alphabetical = colours.OrderBy(t => t);

foreach (var colour in alphabetical)
{
    Console.WriteLine(colour);
}

In the above example, the entire collection of data (in this case, a series of colours) is loaded into memory (string[] colours) and then ordered, which creates a new collection of type IOrderedEnumerable<string>. The only way of returning IQueryable<string> instead of IOrderedEnumerable<string> is to use the AsQueryable() extension method. There generally wouldn’t be much point in doing this, as the expression would likely never be executed by a query provider.
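For completeness, this is what AsQueryable() looks like in practice; the result is an EnumerableQuery<string> whose provider still executes everything in memory via LINQ to Objects:

```csharp
string[] colours = new[] { "Red", "Orange", "Yellow", "Green", "Blue", "Indigo", "Violet" };

// An expression tree is built, but the provider behind it is the
// in-memory EnumerableQuery provider, not a database query provider.
IQueryable<string> alphabetical = colours.AsQueryable().OrderBy(t => t);

foreach (var colour in alphabetical)
{
    Console.WriteLine(colour);
}
```
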

IQueryable, a simple example

Let’s have a look at some simple code that uses IQueryable;

using (AdventureWorks2012Entities context = new AdventureWorks2012Entities())
{
    IQueryable<SalesPerson> top10SalesByBonusQueryable = context.SalesPersons.OrderByDescending(p => p.Bonus).Take(10);
    foreach (var salesPerson in top10SalesByBonusQueryable)
    {
        Console.WriteLine(salesPerson.Bonus);
    }
}

The code first orders all the sales people by the amount of bonus they have received, and then takes the top 10. Well almost but not quite. IQueryable uses a DbQueryProvider (IQueryProvider) to translate the expression (the chained extension methods) into a single database query (in this case, it generates T-SQL to run against the database). Once the query is invoked (by say, enumerating it), the query is executed against the database and the results are returned back to be consumed.

The above code generates the following T-SQL;

SELECT TOP (10) 
    [Extent1].[BusinessEntityID] AS [BusinessEntityID], 
    [Extent1].[TerritoryID] AS [TerritoryID], 
    [Extent1].[SalesQuota] AS [SalesQuota], 
    [Extent1].[Bonus] AS [Bonus], 
    [Extent1].[CommissionPct] AS [CommissionPct], 
    [Extent1].[SalesYTD] AS [SalesYTD], 
    [Extent1].[SalesLastYear] AS [SalesLastYear], 
    [Extent1].[rowguid] AS [rowguid], 
    [Extent1].[ModifiedDate] AS [ModifiedDate]
    FROM [Sales].[SalesPerson] AS [Extent1]
    ORDER BY [Extent1].[Bonus] DESC

This query is very efficient. Only 10 rows are selected (Take(10)) and the result set is ordered (OrderByDescending(p => p.Bonus)). The same query using IEnumerable would have required the entire dataset, which could consist of thousands of rows, to be preloaded into memory. Granted with servers getting evermore powerful, this isn’t as big a deal as it used to be. However, when your website or application grows or indeed you are using a service such as Windows Azure that bills you for CPU time/database usage etc, this is going to become an issue … a potentially costly one at that.

Decompiling the code (using a fantastic tool called dotPeek) reveals an interesting insight;

using (AdventureWorks2012Entities works2012Entities = new AdventureWorks2012Entities())
{
    DbSet<SalesPerson> salesPersons = works2012Entities.SalesPersons;
    Expression<Func<SalesPerson, Decimal>> keySelector = (Expression<Func<SalesPerson, Decimal>>) (p => p.Bonus);
    foreach (SalesPerson salesPerson in (IEnumerable<SalesPerson>) Queryable.Take<SalesPerson>((IQueryable<SalesPerson>) Queryable.OrderByDescending<SalesPerson, Decimal>((IQueryable<SalesPerson>) salesPersons, keySelector), 10))
        Console.WriteLine(salesPerson.Bonus);
}

Specifically notice that the Take extension method is not coming from the System.Linq.Enumerable extensions file, but instead from the System.Linq.Queryable extensions file. A small detail that is easily overlooked. Hence Queryable.Take<SalesPerson> rather than Enumerable.Take<SalesPerson>.

The conclusion we can make from the above code sample, is that Entity Framework (or indeed LINQ to SQL) is going to attempt to write the most efficient query to run against our database. By efficient, I mean a query that is going to return the smallest amount of data needed to fulfil the request.

IQueryable in Entity Framework

As part of getting to grips with IQueryable, I downloaded the Entity Framework source code from CodePlex and started examining it.

All of your queries for data when using Entity Framework are written against DbSet. For example, in the above code we are querying against the SalesPerson entity, which is exposed on our DbContext as a DbSet;

public DbSet<SalesPerson> SalesPersons { get; set; }

Closer examination of DbSet reveals that it implements IQueryable (generic and non generic). No real surprises there. You may be intrigued to know that DbSet also implements IEnumerable (generic and non generic) as well.

public class DbSet<TEntity> : DbQuery<TEntity>, IDbSet<TEntity>, IQueryable<TEntity>, IEnumerable<TEntity>, IQueryable, IEnumerable where TEntity : class
{

}

First things first, as you might expect… if you remove the IQueryable interface and implementation (generic and non-generic), all your code is compiled with the IEnumerable provider/extensions. Naturally your code won’t actually do anything, as there is no query being run against the database. But how does the compiler decide to use IQueryable over IEnumerable? Well, sorry to disappoint, but it’s just a case of the order in which the interfaces appear on DbSet. It’s very hard to test this theory with Entity Framework itself, as IEnumerable lives in System.Collections, which I do not have the actual source code for.

A simple code sample can confirm this;

public class Program
{
    static void Main(string[] args)
    {
        var myTest = new MyTest();
        myTest.DoSomething();
    }
}

public class MyTest : IMyTest, IYourTest
{
}

public static class Extensions
{
    public static void DoSomething(this IYourTest obj)
    {
    }

    public static void DoSomething(this IMyTest obj)
    {
    }
}

Granted the code doesn’t actually compile due to the ambiguity, but switching around the interfaces on MyTest and looking at the IntelliSense for myTest.DoSomething() proves the point.

IQueryable vs IEnumerable in your Repository

One of the hottest Entity Framework related topics on the web over the past few months has been about whether or not to use IEnumerable or IQueryable on your repositories. Hopefully this analysis will help clear things up a bit.

First things first, one important goal when designing your project is to have a clear boundary between each of the areas. Your view logic should be in the view, the business logic should be in its own project, and data access code should only care about retrieving and saving data from/to the database. It’s considered bad practice to have code bleed between these areas.

Take the following unit of work;

public class QueryableUnitOfWork : IDisposable
{
    private AdventureWorks2012Entities _context = new AdventureWorks2012Entities();

    public IQueryable<SalesPerson> GetTopSalesPeople()
    {
        return _context.SalesPersons.OrderByDescending(p => p.Bonus).Take(10);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    public void Dispose(bool isDisposing)
    {
        _context.Dispose();
    }
}

The purpose of this code is quite simply to get the top sales people based on the amount of bonus they were paid. The method returns IQueryable, meaning that an expression will be written and translated to T-SQL later.

Now the consumer of this code can influence the final expression that is generated under the hood, so to speak, by chaining on extension methods. Take the following code;

public static void Run()
{
    using (QueryableUnitOfWork unitOfWork = new QueryableUnitOfWork())
    {
        var topSalesPeople = unitOfWork.GetTopSalesPeople();
        Console.WriteLine(topSalesPeople.Take(3).Sum(p => p.Bonus));
    }
}

The above code only actually cares about the top 3 sales people not the top 10. The resulting T-SQL is as follows;

SELECT 
    [GroupBy1].[A1] AS [C1]
    FROM ( SELECT 
        SUM([Limit1].[Bonus]) AS [A1]
        FROM ( SELECT TOP (3) [Extent1].[Bonus] AS [Bonus]
            FROM [Sales].[SalesPerson] AS [Extent1]
            ORDER BY [Extent1].[Bonus] DESC
        )  AS [Limit1]
    )  AS [GroupBy1]

The expression provider did a great job of interpreting the expression. It returned just the sum of the top 3 sales people’s bonuses, without looking at the top 10 first… great stuff.

So if the expression query is generating the most efficient T-SQL, what’s the problem? Well, our data access logic has bled into our business logic area. The business logic changed the original intention of the data access method. Bad news.

So how to fix this? Your first thought might be to change IQueryable to IEnumerable, so let’s take a look at that first;

public IEnumerable<SalesPerson> GetTopSalesPeople()
{
    return _context.SalesPersons.OrderByDescending(p => p.Bonus).Take(10);
}

It’s very important to understand what actually happens here. Even though your method returns IEnumerable, SalesPersons still implements IQueryable, so this part of the method is still interpreted by the expression provider into an efficient query, and the results are then cast and returned as IEnumerable.

Here is the compiled code;

public IEnumerable<SalesPerson> GetTopSalesPeople()
{
  return (IEnumerable<SalesPerson>) Queryable.Take<SalesPerson>((IQueryable<SalesPerson>) Queryable.OrderByDescending<SalesPerson, Decimal>((IQueryable<SalesPerson>) this._context.SalesPersons, (Expression<Func<SalesPerson, Decimal>>) (p => p.Bonus)), 10);
}

And the resulting T-SQL;

SELECT TOP (10) 
    [Extent1].[BusinessEntityID] AS [BusinessEntityID], 
    [Extent1].[TerritoryID] AS [TerritoryID], 
    [Extent1].[SalesQuota] AS [SalesQuota], 
    [Extent1].[Bonus] AS [Bonus], 
    [Extent1].[CommissionPct] AS [CommissionPct], 
    [Extent1].[SalesYTD] AS [SalesYTD], 
    [Extent1].[SalesLastYear] AS [SalesLastYear], 
    [Extent1].[rowguid] AS [rowguid], 
    [Extent1].[ModifiedDate] AS [ModifiedDate]
    FROM [Sales].[SalesPerson] AS [Extent1]
    ORDER BY [Extent1].[Bonus] DESC

The Take(3) operation is then performed in memory against the resulting IEnumerable collection of 10 sales people, and then the sum is calculated. This, naturally, is slightly less efficient than using IQueryable because you’re working against a larger collection of data. At least our original repository method executed in the way we intended.

So you may be thinking, ‘What’s the big deal? A little boundary bleed never hurt anybody…’. Well, for the most part you are right, but it’s important to know that not every expression can be interpreted by the expression provider. The more complicated a query becomes, the more likely it is that it cannot be interpreted.

There is a simple way to see this behaviour. Take the following code;

public static void Run()
{
    using (QueryableUnitOfWork unitOfWork = new QueryableUnitOfWork())
    {

        var topSalesPeople = from p in unitOfWork.GetTopSalesPeople()
                                select new
                                {
                                    Modified = string.Format("Modified on {0}", p.ModifiedDate.ToShortDateString())
                                };

        foreach (var person in topSalesPeople)
            Console.WriteLine(person.Modified);
    }
}

Pretty straightforward code, right? You simply want to tell the consumer when the top sales people were last modified (date/time). Running the above code results in the following exception;

An unhandled exception of type 'System.NotSupportedException' occurred in mscorlib.dll

Additional information: LINQ to Entities does not recognize the method 'System.String Format(System.String, System.Object[])' method, and this method cannot be translated into a store expression.

Simply put, there is no T-SQL method that maps to String.Format. This problem does not exist when your repository returns IEnumerable, because the query is run in memory against the dataset rather than by the query provider.
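If you do need String.Format against an IQueryable source, one common workaround is to switch to LINQ to Objects part-way through with AsEnumerable(), so the projection runs in memory after the rows have been fetched; a sketch using the same unit of work:

```csharp
using (QueryableUnitOfWork unitOfWork = new QueryableUnitOfWork())
{
    var topSalesPeople = unitOfWork.GetTopSalesPeople()
        .AsEnumerable() // everything after this point runs in memory
        .Select(p => new
        {
            // Safe now: string.Format never reaches the query provider.
            Modified = string.Format("Modified on {0}", p.ModifiedDate.ToShortDateString())
        });

    foreach (var person in topSalesPeople)
        Console.WriteLine(person.Modified);
}
```
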

On a final note, you could return the entire dataset as IEnumerable as follows;

public IEnumerable<SalesPerson> GetTopSalesPeople()
{
    return _context.SalesPersons;
}

Keep in mind, though, that this will cause the entire dataset to be returned and stored in memory, which will probably be expensive in terms of CPU time and memory usage.
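A common compromise, not shown above, is to keep the query inside the repository but let the consumer express its intent through a parameter; the Take still happens in the data layer, so the generated T-SQL selects only the requested rows:

```csharp
public IEnumerable<SalesPerson> GetTopSalesPeople(int count)
{
    // ToList() executes the query here, so no expression leaks out of
    // the data access layer; callers receive a plain in-memory list.
    return _context.SalesPersons
        .OrderByDescending(p => p.Bonus)
        .Take(count)
        .ToList();
}
```
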

Summary

IQueryable works to translate your code into the most efficient queries it can, and for the most part it does a great job. When working with the repository/unit of work pattern, you should think very hard about returning IQueryable to your consuming code, because this results in boundary bleed and can also result in exceptions, as each query interpreter cannot translate every single expression you throw at it.

Download the source code.

Exploring logging in Entity Framework 6

A project I am working on at the minute was recently upgraded to use Entity Framework 6.0.1.  There were many reasons why we wanted to do this, but one of the most compelling reasons was the new logging (SQL Interception) functionality that ships out of the box.

A colleague demonstrated this new functionality and I immediately noticed a problem (limitation if you will) that I foresaw becoming a bigger issue when the time comes to performance tune the application.

Implementing simple logging

Take the very simple code below. We tell Entity Framework about our initializer (which adds some seed data for us) and then use our context as normal (in this case simply outputting the number of customers, and then exiting). This is pretty standard stuff, except for one simple difference: notice the Database.Log assignment in the LoggerDbContext constructor.


class Program
{
    static void Main()
    {
        Database.SetInitializer(new MyInitializer());

        using (LoggerDbContext context = new LoggerDbContext())
        {
            Console.WriteLine(context.Customers.Count());
        }

        Console.ReadLine();
    }
}

public class LoggerDbContext : DbContext
{
    public LoggerDbContext()
    {
        Database.Log = Console.WriteLine;
    }

    public DbSet<Customer> Customers { get; set; }
}

public class MyInitializer : DropCreateDatabaseAlways<LoggerDbContext>
{
    protected override void Seed(LoggerDbContext context)
    {
        base.Seed(context);

        context.Customers.Add(new Customer
        {
            FirstName = "Jon",
            LastName = "Preece"
        });

        context.Customers.Add(new Customer
        {
            FirstName = "Sian",
            LastName = "Preece"
        });

        context.Customers.Add(new Customer
        {
            FirstName = "Oliver",
            LastName = "Preece"
        });
    }
}

public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

Database.Log = Console.WriteLine is somewhat helpful to us. If you run the above code, you will see various SQL statements outputted to the console window, as well as execution time and/or any error information. At first glance this is all useful information, but what about when your application grows and/or becomes more complex? Well the importance of having this information stays very high, but the raw nature of the humble System.String is very restrictive.

I really want a complex object that gives me the actual DbCommand that was executed, and associated information (such as execution time) in their raw formats. I want this information so I can easily filter what information I output to my logs, say using the execution time as a filter (only log transactions taking > 50 ms). I really want to avoid having a clunky method that strips out the execution time from the System.String itself.

Writing a custom formatter

Looking at the one and only formatter, DatabaseLogFormatter, that ships with Entity Framework 6, we can see that all we need to do is implement the IDbCommandInterceptor interface. This is then registered with Entity Framework (shown shortly), which in turn calls the appropriate method whenever a command is executed. This is known as an interceptor because it, for lack of a better word, intercepts various actions performed by the framework so that we can do some custom work.

The default formatter, DatabaseLogFormatter, simply takes the DbCommand, extracts the raw SQL and outputs it as a System.String. It also uses a System.Diagnostics.Stopwatch object to time how long the command took to execute. Whilst this isn’t an exact science, it’s certainly a very good means of seeing how well our queries (and ultimately our application) are performing.

Here is an example formatter, called LogFormatter;

public class LogFormatter : IDbCommandInterceptor
{
    private readonly Stopwatch _stopwatch = new Stopwatch();

    public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext)
    {
        _stopwatch.Restart();
    }

    public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<int> interceptionContext)
    {
        _stopwatch.Stop();
        Log(command, interceptionContext);
    }

    public void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        _stopwatch.Restart();
    }

    public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        _stopwatch.Stop();
        Log(command, interceptionContext);
    }

    public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext)
    {
        _stopwatch.Restart();
    }

    public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext)
    {
        _stopwatch.Stop();
        Log(command, interceptionContext);
    }

    private void Log<TResult>(DbCommand command, DbCommandInterceptionContext<TResult> interceptionContext)
    {
        Logger.Log(command, _stopwatch.ElapsedMilliseconds, interceptionContext.Exception);
    }
}

public static class Logger
{
    public static void Log(DbCommand command, long elapsedMilliseconds, Exception exception)
    {
        //Do something useful here with the raw data
    }
}

To begin using your formatter, simply call the static System.Data.Entity.Infrastructure.Interception.DbInterception.Add method and pass in a new instance of your formatter.

DbInterception.Add(new LogFormatter());

And, in case you’re wondering … you no longer need that pesky Database.Log code shown above, your logger will still work just fine without it.
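To illustrate the filtering idea described earlier (only logging slow or failing commands), the Logger stub might be fleshed out like this; the 50 ms threshold and the console output destination are assumptions for the sketch:

```csharp
public static class Logger
{
    // Hypothetical threshold; tune to taste.
    private const long SlowCommandThresholdMs = 50;

    public static void Log(DbCommand command, long elapsedMilliseconds, Exception exception)
    {
        // Ignore fast, successful commands to keep the log signal high.
        if (exception == null && elapsedMilliseconds <= SlowCommandThresholdMs)
            return;

        Console.WriteLine("[{0} ms] {1}", elapsedMilliseconds, command.CommandText);

        if (exception != null)
            Console.WriteLine("Command failed: {0}", exception.Message);
    }
}
```
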

Of course there is no better substitute for profiling the performance of a large application than using a tool such as SQL Profiler, but this is certainly a step in the right direction.

Summary

We can write our own logger (interceptor) to format the results of any action performed by Entity Framework (reading from or saving to the database, etc.) however we like, simply by implementing the IDbCommandInterceptor interface on a vanilla C# class and then passing it to Entity Framework using the static method DbInterception.Add.

Hopefully you will find this useful. Feel free to leave any comments/feedback below.

How to fix ‘Project failed to build’ error when adding an Entity Framework migration (add-migration)

I recently spent several hours tearing my hair out try to figure out why Entity Framework was preventing me from adding a migration to my project.

I made some very simple changes to my existing model (I’m developing using the Entity Framework Code First approach) and when trying to add a migration, I was presented with the following error message;

Project Failed To Build

But how can this be?  For you see, my project does build…

Build Succeeded

My project not only builds, but it runs and works just fine … (up until the point where Entity Framework complains about the out of date model of course!).

The solution

It was only after considerable head scratching, and tinkering, that I came across the solution.  My project specifically targets the x86  platform, as shown below;

Configuration Options

If I go ahead and change the build configuration to ‘Any CPU’ … I am greeted with a couple of dozen build errors, all relating to missing references! Once I resolved these issues, I was once again able to create migrations.

Apparently Entity Framework builds your project in ‘Any CPU’ when creating migrations (and probably when issuing other commands as well). Hopefully this will help resolve your problem as well.

WPF Entity Framework Code First migrations

Recently I wrote a WPF application using Entity Framework Code First and released it into the wild.  Shortly after, the business requirements changed and I had to make changes to the model.  Thus, I was introduced to Code First migrations.

I did a lot of research on Code First migrations and could only really find the most simple of examples, and almost robotic documentation on MSDN that provided minimal help. This post aims to provide a clearer view on migrations, and how to properly create them.

Before we can start with migrations, we need to create our normal C# code and then add in Entity Framework using NuGet.  Once done, we can create our first migration and then explore a second migration.

The Code

The code is going to revolve around the common idea of customers and orders.

C# Code Structure

Code structure: an Order has a collection of Product and a Customer, and the Customer has a CustomerName. Any properties marked as virtual are navigation properties, and are lazy loaded by Entity Framework at runtime.

public class Customer
{
    public int Id { get; set; }
    public int CustomerNameId { get; set; }
    public virtual CustomerName CustomerName { get; set; }
}

public class CustomerName
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Product
{
    public int Id { get; set; }
    public string Description { get; set; }
    public decimal Price { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public virtual Customer Customer { get; set; }
    public virtual List<Product> Products { get; set; }
    public DateTime Placed { get; set; }
}

With the code structure in place, we can now introduce Entity Framework. Go ahead and use the package manager console, or the library package manager to install Entity Framework.

install-package EntityFramework

Enabling Migrations

Before enabling migrations, ensure that you have a DbContext class that contains a DbSet for use throughout the application;

public class Context : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

Entity Framework uses the DbContext class to determine the structure of your database.
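As an aside, by convention Entity Framework will create a database named after the fully-qualified context class. If you would rather point it at a specific connection string, one option is to pass the connection string name to the base constructor (a sketch; the name "CodeFirstMigrations" is an assumption, use whatever entry exists in your App.config):

```csharp
public class Context : DbContext
{
    // "name=..." tells Entity Framework to look up a connection string
    // with this name in App.config, rather than treating the argument
    // as a raw connection string
    public Context() : base("name=CodeFirstMigrations") { }

    public DbSet<Order> Orders { get; set; }
}
```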

The key to making migrations work properly is to create an initial migration that will set up your database into its initial state when you deploy the application.

In the package manager console window, type the following command;

Enable-Migrations

You will notice that this process has created a new folder (Migrations) and two new files: Configuration.cs and InitialCreate (prefixed with a date/time stamp).

Entity Framework Initial Create

A word about seed data

At the time of writing, the current version of Entity Framework is 5.0.0.0 (public release). I have found that seed data does not behave in the way that I expect.

Open Configuration.cs and observe the seed data method;

protected override void Seed(CodeFirstMigrations.Context context)
{
    //  This method will be called after migrating to the latest version.

    //  You can use the DbSet<T>.AddOrUpdate() helper extension method
    //  to avoid creating duplicate seed data. E.g.
    //
    //    context.People.AddOrUpdate(
    //      p => p.FullName,
    //      new Person { FullName = "Andrew Peters" },
    //      new Person { FullName = "Brice Lambson" },
    //      new Person { FullName = "Rowan Miller" }
    //    );
    //
}

Note that the comment clearly states that the method will be called directly after migrating to the latest version. I’ve found that, in reality, this method is called every time your application starts up. I suppose this is why the AddOrUpdate extension method was added, to prevent duplicate seed data.

Analysing the Visual Studio IntelliSense documentation closely, it’s pretty clear what this method expects for the first parameter;

AddOrUpdate Extension Method


For the AddOrUpdate method, we must pass in a parameter that will determine if the seed data is ADDED or UPDATED. Unfortunately, this doesn’t seem to work when passing in the primary key of the table (the Id property). For me, this is a major limitation and hopefully it will be resolved in future releases.

The solution here is to skip the seed data method altogether, and add some logic in the application start-up method to determine if the seed data needs to be added or updated.
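One way to do this is a simple existence check at start-up (a sketch; the OnStartup override assumes a WPF host, and the seed values are illustrative):

```csharp
protected override void OnStartup(StartupEventArgs e)
{
    base.OnStartup(e);

    using (var context = new Context())
    {
        // Seed only when the table is empty, rather than relying on
        // AddOrUpdate matching against the database-generated Id
        if (!context.Orders.Any())
        {
            context.Orders.Add(new Order
            {
                Customer = new Customer
                {
                    CustomerName = new CustomerName { FirstName = "Jon", LastName = "Preece" }
                },
                Placed = DateTime.UtcNow
            });

            context.SaveChanges();
        }
    }
}
```

A natural key check (for example, on CustomerName) works just as well if you need to update existing rows rather than only insert missing ones.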

The final step is to update your initializer to use the MigrateDatabaseToLatestVersion database initializer;

public class Initializer : MigrateDatabaseToLatestVersion<Context, Configuration>
{
}
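With the initializer class in place, you can register it before the context is first used (a sketch, registering at application start-up):

```csharp
protected override void OnStartup(StartupEventArgs e)
{
    base.OnStartup(e);

    // Any pending migrations (and the Seed method) will run
    // the first time the context is used
    Database.SetInitializer(new Initializer());
}
```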

You can now deploy your application to your customers, knowing that when the customer runs the program for the first time, the database will be created as you expect and the seed data will also be added.

Your second migration

Your second migration should now be straightforward. Simply go ahead and make the desired changes to your model. We will add the following class;

[Table("CreditCardInformation")]
public class CreditCardInformation
{
    public int Id { get; set; }
    public CreditCardType CreditCardType { get; set; }
    public string CreditCardNumber { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
}

Note that the Table attribute used above ensures that Entity Framework’s pluralisation service doesn’t incorrectly name our table. Next, add a CreditCardInformation property to Customer.

public class Customer
{
    public int Id { get; set; }
    public int CustomerNameId { get; set; }
    public virtual CustomerName CustomerName { get; set; }
    public virtual CreditCardInformation CreditCardInformation { get; set; }
}

Now you are ready to create a migration. You can name the migration simply by passing the desired name in as a parameter.

Add-Migration AddCreditCardInformation

Your migration will look something like this;

public partial class AddCreditCardInformation : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "dbo.CreditCardInformation",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    CreditCardType = c.Int(nullable: false),
                    CreditCardNumber = c.String(),
                    StartDate = c.DateTime(nullable: false),
                    EndDate = c.DateTime(nullable: false),
                })
            .PrimaryKey(t => t.Id);

        AddColumn("dbo.Customers", "CreditCardInformation_Id", c => c.Int());
        AddForeignKey("dbo.Customers", "CreditCardInformation_Id", "dbo.CreditCardInformation", "Id");
        CreateIndex("dbo.Customers", "CreditCardInformation_Id");
    }

    public override void Down()
    {
        DropIndex("dbo.Customers", new[] { "CreditCardInformation_Id" });
        DropForeignKey("dbo.Customers", "CreditCardInformation_Id", "dbo.CreditCardInformation");
        DropColumn("dbo.Customers", "CreditCardInformation_Id");
        DropTable("dbo.CreditCardInformation");
    }
}

Entity Framework gives us the ability to roll back a migration if needed. When a roll back occurs, the Down method on the migration is called and the changes are reverted.

To perform a rollback, simply run the following command;

Update-Database -TargetMigration:"NameOfMigrationToRollBackTo"

If you try to run the application at this time, you will get the following error message;

The model backing the 'Context' context has changed since the database was created.
Consider using Code First Migrations to update the database.

This is because you need to commit the changes to your database. You can do this using the following command;

Update-Database

Run the application again, and all should work as you expect.

Summary

Entity Framework Code First migrations are an exciting and powerful new feature that will dramatically simplify your deployment and update process. The MigrateDatabaseToLatestVersion database initialization strategy will take care of automatically updating your database to the most current version, and can even help with seed data. Seed data is made easy thanks to a new AddOrUpdate extension method, but is limited in that you cannot use the table primary key as the identifier.

Entity Framework Code First In 15 Minutes

Entity Framework is an Object Relational Mapper (ORM), created by Microsoft and built on top of the popular ADO.NET framework.  Entity Framework was first released in the second half of 2008.  In 2011, with the release of version 4.1, a new feature known as “Code First” was introduced.

Code First allows us to write code without having to care (to a certain extent) about the make-up of the database, its tables and their relationships.  The idea is that Entity Framework figures all this out for us, and it does a pretty good job!  Entity Framework Code First is designed around the “convention over configuration” principle, meaning that as long as things are named in a consistent way, your code should “just work” with very little effort.  If it’s not possible to write your code in a way that Entity Framework understands, you can tell Entity Framework about your model using attributes, the Fluent API, or a combination of both.

Adding Entity Framework to your project

The quickest way to add Entity Framework to your project is using the Package Manager Console (NuGet). In Visual Studio, click “View > Other Windows > Package Manager Console”, and type the following command;

Install-Package EntityFramework

Install Package

This process will add all the appropriate references to your project.

Super Quick Walk-through

This tutorial aims to get you up and running in about 15 minutes.  We will create a very simple application, which has a WPF front end and an Entity Framework back end, based around the idea of customers, addresses and orders.  Every customer will have an address and optionally they can have orders.

So let’s get to it; the clock is ticking.  Create a new WPF project and call it SuperQuick.

SuperQuick

Add the following classes to your project;

  • Address.cs
  • Customer.cs
  • MainWindowViewModel.cs (the view model for our view, as per the MVVM pattern)
  • SuperQuickContext.cs (This will be our database context)
  • SuperQuickInitializer.cs (This will define the database initialization strategy)

Add the following code to Customer.cs

public class Customer
{
    public Customer()
    {
        Orders = new List<Order>();
    }

    public int Id { get; set; }

    public string FirstName { get; set; }

    public string LastName { get; set; }

    public Address Address { get; set; }

    public virtual List<Order> Orders { get; set; }

    public override string ToString()
    {
        return string.Format("{0}, {1}", LastName.ToUpper(), FirstName);
    }
}

Let’s take a moment to look at this code.  Customer will be mapped to its own table in your database, and Id will be considered the primary key.  Notice that we have a property called Address, of type Address.  Address is a complex type, meaning it consists of multiple fields/properties.

How will this be mapped to the database?  Well Entity Framework will analyse the class and create a column for each publicly accessible property.  The database table will eventually look like this (once we flesh out the Address class);

Complex Type

Notice the naming of each column.  This is significant because Entity Framework uses the column name to match the value back to each property on your class.  Also note the data type for each column.  You can explicitly dictate what data type you want to use (perhaps NVARCHAR(255) for a string no longer than 255 characters) using a combination of attributes and the fluent API.  We may cover these topics in more detail in another blog post.
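For example, the MaxLength data annotation (from System.ComponentModel.DataAnnotations) will produce an NVARCHAR(255) column rather than the default NVARCHAR(MAX). A sketch, with a rough Fluent API equivalent shown in the comment:

```csharp
using System.ComponentModel.DataAnnotations;

public class Address
{
    // Generates NVARCHAR(255) instead of NVARCHAR(MAX).
    // The rough Fluent API equivalent (Address being a complex type) is:
    //   modelBuilder.ComplexType<Address>()
    //               .Property(a => a.Line1)
    //               .HasMaxLength(255);
    [MaxLength(255)]
    public string Line1 { get; set; }
}
```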

Also notice that we have a List of type Order, representing the orders placed by the customer.  This property forms a one-to-many relationship, as a customer can have none, one, or many orders.  The property is marked as virtual so that Entity Framework can create a dynamic proxy for it, which enables lazy loading and change tracking.  When used correctly, this can improve performance by only retrieving data when it is needed.

Add the following to Address.cs

public class Address
{
    public string Line1 { get; set; }

    public string Line2 { get; set; }

    public string Town { get; set; }

    public string PostCode { get; set; }

    public override string ToString()
    {
        return string.Format("{0}\n{1}\n{2}\n{3}", Line1, Line2, Town, PostCode);
    }
}
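The walk-through also references an Order class that is not shown. A minimal definition consistent with the seed data used later might look like this (a sketch; the property types are assumptions inferred from the seed values):

```csharp
public class Order
{
    public int Id { get; set; }

    public string Item { get; set; }

    public decimal Amount { get; set; }
}
```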

Next, we need to create a database context (DbContext).  We will use the DbContext to tell Entity Framework about our model classes.

Add the following code to SuperQuickContext.cs

public class SuperQuickContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

Notice that we haven’t created a DbSet for the Address or Order classes?  Well Entity Framework detects that Customer has a dependency on these classes, so it just includes them automatically for us.  Adding a DbSet for the Order class wouldn’t affect the model, but adding a DbSet for Address would result in an error, because Address doesn’t have a suitable primary key property.

Next, we need to decide which database initialization strategy we want to use.  This tells Entity Framework what to do with our database if our model changes (or if the database doesn’t exist).  When developing our project, it’s fine to use the DropCreateDatabaseIfModelChanges or DropCreateDatabaseAlways strategy.  When moving to a release, you may want to disable database initialization and make use of an Entity Framework feature called “Code First Migrations” … or you will lose all of your data!

For us, there is no need to recreate the database unless the model changes, so we will use DropCreateDatabaseIfModelChanges.

We could override the OnStartup method in App.xaml.cs and set our initializer;

protected override void OnStartup(StartupEventArgs e)
{
    base.OnStartup(e);

    Database.SetInitializer(new DropCreateDatabaseIfModelChanges<SuperQuickContext>());
}

But actually this is not quite what we want. We want to introduce seed data to set some “default values” in our database when it gets created or dropped and re-created.

Add the following code to SuperQuickInitializer.cs

public class SuperQuickInitializer : DropCreateDatabaseIfModelChanges<SuperQuickContext>
{
    protected override void Seed(SuperQuickContext context)
    {
        //Create some dummy data
        Address addressOne = new Address
        {
            Line1 = "Address Line 1",
            Line2 = "Address Line 2",
            PostCode = "AB1 ABC",
            Town = "The Town"
        };

        Address addressTwo = new Address
        {
            Line1 = "Second Address 1",
            Line2 = "Second Address 2",
            PostCode = "DE2 DEF",
            Town = "Second Town"
        };

        Customer customerOne = new Customer
        {
            Address = addressOne,
            FirstName = "Jon",
            LastName = "Preece"
        };

        Customer customerTwo = new Customer
        {
            Address = addressTwo,
            FirstName = "Mike",
            LastName = "Smith"
        };

        Order order = new Order
        {
            Amount = 10,
            Item = "Mouse"
        };

        Order orderTwo = new Order
        {
            Amount = 20,
            Item = "Keyboard"
        };

        Order orderThree = new Order
        {
            Item = "Monitor",
            Amount = 100
        };

        customerOne.Orders.Add(order);
        customerTwo.Orders.AddRange(new[] { orderTwo, orderThree });

        //Add to the context
        context.Customers.Add(customerOne);
        context.Customers.Add(customerTwo);

        //Save changes
        context.SaveChanges();
    }
}

Note that we have inherited from DropCreateDatabaseIfModelChanges so that we get all of its behaviour, whilst being able to insert our own seed data.

Now flip back over to App.xaml.cs and add the following code;

protected override void OnStartup(StartupEventArgs e)
{
    base.OnStartup(e);

    Database.SetInitializer(new SuperQuickInitializer());
}

User Interface

We’re just about done setting up our model, so now let’s turn our attention to the user interface.  First, set the MainWindow data context to an instance of MainWindowViewModel.  Switch over to MainWindow.xaml and add the following code underneath the opening Window tag (be sure to add an XML namespace, called local);

<Window.DataContext>
    <local:MainWindowViewModel />
</Window.DataContext>

Replace the existing Grid control with the following code;

<Grid>
    <ScrollViewer>
        <ItemsControl ItemsSource="{Binding Customers}"
                        AlternationCount="2"
                        ScrollViewer.CanContentScroll="True">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <StackPanel x:Name="Main">
                        <TextBlock Text="{Binding}"
                                    FontWeight="Bold"
                                    FontSize="14"
                                    Padding="10,10,0,0" />
                        <TextBlock Text="{Binding Address}"
                                    FontSize="12"
                                    Padding="10,0,0,10" />
                        <StackPanel Orientation="Horizontal">
                            <TextBlock Text="Orders:"
                                        Margin="0,0,5,0"
                                        Padding="10,0,0,10"
                                        FontWeight="Bold" />
                            <TextBlock Text="{Binding Orders.Count}" 
                                        Padding="0,0,0,10"/>
                        </StackPanel>
                    </StackPanel>
                    <DataTemplate.Triggers>
                        <Trigger Property="ItemsControl.AlternationIndex"
                                    Value="0">
                            <Setter TargetName="Main"
                                    Property="Background"
                                    Value="#220000FF" />
                        </Trigger>
                        <Trigger Property="ItemsControl.AlternationIndex"
                                    Value="1">
                            <Setter TargetName="Main"
                                    Property="Background"
                                    Value="White" />
                        </Trigger>
                    </DataTemplate.Triggers>
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>
    </ScrollViewer>
</Grid>

This will display our customers, their address, and the number of orders they have placed.

To satisfy the XAML, we need to add some code to our view-model … as follows;

public ObservableCollection<Customer> Customers { get; set; }

Finally, we need to populate the ObservableCollection with code from our database. Add the following constructor to the class;

public MainWindowViewModel()
{
    var context = new SuperQuickContext();

    Customers = new ObservableCollection<Customer>(context.Customers);
}

Summary

In this super quick introduction, we have seen how easy it is to create an application that persists data to a database using Entity Framework. We have touched on topics such as complex types, navigation properties, and database initializers. Subsequent posts will cover more advanced topics such as data annotations, the Fluent API, and Code First migrations.