.NET developers, utilize Git better using the command prompt

I’ve been using Git in Visual Studio for quite some time now (since not long after it was released) and I’ve really grown to like it.  I particularly like the speed and general ease of use.  I find TFS/TFVC (whether that be a local instance or TFS online) to be slow and unreliable, and it has really poor command line support (sorry guys!).  It’s fair to say that I’ve been a little bit intimidated by Git and have rarely (OK, never!) strayed away from the UI available through Visual Studio.  Until now.

Visual Studio barely scratches the surface when it comes to the wealth of additional functionality available via the command prompt.  This post looks at some of the most commonly used commands (and how to use them).

10 Git commands for .NET developers

The easiest way to get started is;

  1. Open Visual Studio (and ideally open/create a project)
  2. Open the Team Explorer
  3. Click Changes, and then from the Actions drop down click Open Command Prompt.


As an additional win, you can also bring up extensive formal documentation about any Git command as follows;

git help {command}

Terminology Comparison

There are some slight differences in terminology between Git and TFVC; here are some of them;

| Git | TFVC |
| --- | --- |
| Commit | Changeset |
| Commit | Check In |
| Clone | Map |
| Snapshot | (no direct equivalent) |
| Commit id (the SHA1 checksum) | Changeset number (sequential number) |
| HEAD | Current branch |
| Repository | Team Project |
| Author / Contributor | User |
| Fork (take a copy of) | Branch |

Git Add

Adds a given file to the tracked (uncommitted) list, or adds all untracked files under the given directory.  By default, new files are not tracked by Git; they have to be added to the index.

Scenario

  1. Add a new file to the root directory called ReadMe
  2. Run git add ReadMe
  3. The file is added to the Included Changes list
  4. Alternatively, run git add . to add all untracked files in the current directory (recursively scanning each sub directory)
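The scenario above can be sketched end-to-end in a throwaway repository (the file contents and user details here are just examples);

```shell
#!/bin/sh
set -e

# Work in a temporary repository so nothing here touches real code
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

# 1. Add a new file called ReadMe - Git does not track it yet
echo "Hello, Git" > ReadMe
git status --short          # shows "?? ReadMe" (untracked)

# 2. Stage it - it now appears under Included Changes in Team Explorer
git add ReadMe
git status --short          # shows "A  ReadMe" (staged)

# 3. Alternatively, stage every untracked/modified file recursively
git add .
```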

 

Git Branch

Same concept as in TFVC; you typically create branches when;

  1. Large teams are developing new functionality, and separate branches make this easier
  2. You are releasing the code to live or a test environment
  3. You want to fix bugs and you hate the world

There are a few useful parameters you can pass to git branch;

  1. {name-of-new-branch} if you just want to create a new branch
  2. --list, or no parameters, shows a list of current branches
  3. -m {new-name} renames the current branch
  4. -d {branch-name} deletes the branch

Example:

git branch {name-of-new-branch}
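All four parameters can be tried together in a disposable repository (the branch names are invented);

```shell
#!/bin/sh
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"
# A branch needs at least one commit to point at
git commit -q --allow-empty -m "Initial commit"

git branch feature-login                  # 1. create a new branch
git branch --list                         # 2. list current branches
git branch -m feature-login feature-auth  # 3. rename the branch
git branch -d feature-auth                # 4. delete it (-d only deletes fully merged branches)
```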

 

Git Checkout

Switches the current development branch to the branch specified;

git checkout {some-branch}

All new changes will be committed to this branch.  If the branch does not already exist, you can create it and check it out using a single command;

git checkout -b {some-branch}

It can also be used to roll back to previous versions of files, or to a specific commit, as follows;

git checkout {commit-number} {file-name}

{commit-number} the specific commit to roll back to
{file-name} optional, used to roll back a specific file
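For example, with two commits to a single file (names and messages invented), the file can be rolled back to its state at the first commit;

```shell
#!/bin/sh
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "version 1" > notes.txt
git add notes.txt
git commit -q -m "First version"
first=$(git rev-parse --short HEAD)   # capture the commit number (SHA1)

echo "version 2" > notes.txt
git commit -q -am "Second version"

# Roll notes.txt back to its state at the first commit;
# the restored version is automatically staged
git checkout "$first" notes.txt
cat notes.txt                         # prints "version 1"
```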

 

Git Clone

Clones (gets a copy of) the specified repository.  This example uses Visual Studio Online, but it should work with self-hosted instances of TFS (simply change the domain or IP address to that of your TFS instance);

git clone https://{your-user-name}.visualstudio.com/DefaultCollection/_git/{project-name} {local-path}

Replace {your-user-name} with your TFS username and {project-name} with the name of the project you want to clone.  {local-path} is the location on your hard drive where you want to store the cloned files.

When you successfully connect, you will be prompted to enter your user name and password.  When the clone starts, you’ll see some neat ASCII art and the progress of the operation.
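The same command works against any reachable repository; a local bare repository can stand in for the remote if you want to try it safely;

```shell
#!/bin/sh
set -e

# A bare repository stands in for the remote (TFS/Visual Studio Online) end
remote=$(mktemp -d)
git init -q --bare "$remote"

# Clone it to a local path, exactly as with the https URL form above
workdir=$(mktemp -d)
git clone -q "$remote" "$workdir/MyProject"

ls -a "$workdir/MyProject"   # the clone contains its own .git directory
```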


 

Git Commit

Takes a snapshot (also known as a revision) of the file/folder at the current point in time.  A snapshot cannot easily be edited.  You can roll back to specific snapshots as required.

Before you can take a snapshot, you must stage the changes that will be included.  This is done using git add (see above).  To include all files in the snapshot, use git add . (note the trailing dot).

There are a lot of options you can pass to this command, but the one you will use most often is -m, which allows you to pass a message that identifies the commit at a later time;

git commit -m "{your-message-here}"

Git will display the first part of the commit identifier, which can be helpful later (for example, when rolling back the commit using the git reset command).  You can view a recent list of commits using the git log command.
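Putting commit and log together in a scratch repository (file name and message invented);

```shell
#!/bin/sh
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "some content" > Program.cs
git add Program.cs

# Commit prints the branch name and the start of the commit identifier
git commit -m "Add Program.cs"

# The short identifier also appears in the log
git log -n 1 --oneline
```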


 

Git Diff

Used to display the differences between the current directory/file and either the index or the most recent commit.

To display all the differences (including renames, deletes, adds, modifications etc.) against the index;

git diff

To display all the differences against the most recent commit;

git diff HEAD


The standard diff tool will display differences inline (red is the indexed/HEAD copy, green is the local copy).  To be honest, I don’t find this particularly helpful … especially when you are working with lots of differences.

Custom Diff Tool

You can easily configure Git to use a custom tool.  I use Beyond Compare (because I love it), but you can use whatever tool you like.  For a full explanation on how to configure Beyond Compare, see Git for Windows on the Scooter Software website.

Use the git config command to change the global diff setting;

git config --global diff.tool bc3
git config --global difftool.bc3.path "c:/Program Files (x86)/Beyond Compare 4/bcomp.exe"

You can also change the configuration for merging, which goes like this;

git config --global merge.tool bc3
git config --global mergetool.bc3.path "c:/Program Files (x86)/Beyond Compare 4/bcomp.exe"

To run the diff in the custom tool, run git difftool {file-to-diff}


I’m much more comfortable working with a proper GUI as I’m so used to using Visual Studio’s built in diff tool (which is also great).

 

Git Log

Displays recent commits to the current branch.  Simple but very useful command that shows a detailed list of revisions, including the SHA1 checksum, the author/contributor, the commit date and the commit message.

git log -n {number-to-display}


The -n option is optional, but it’s useful as you could end up with a big list if you work with a larger team.

 

Git Pull

Grabs (pulls) the remote repository and merges it into the local copy in a single command. Git pull is actually a short-hand for git fetch followed by git merge.  To pull in all changes that have been made by other collaborators, you should use the git pull command.

git pull {remote-path}

When working with TFS, again the {remote-path} is that of the project to pull, for example;

https://{your-user-name}.visualstudio.com/DefaultCollection/_git/{project-name}

Pull Requests

A pull request is a mechanism for developers to request that their changes be merged into the main branch.

Example: When using services such as GitHub or BitBucket, public collaboration and participation is possible and encouraged due to the open source nature of the code.  Anybody can come along, fork the repository (effectively creating a copy under their own user account) and start making modifications.  A pull request is simply a means of notifying the original owner that you want them to review your changes and ultimately merge them back to the master branch.  This way anybody can make changes to a project whilst the owner maintains overall control.

 

Git Push

All changes you make are stored in your local environment until you push them to the remote repository.  This process synchronises your local repository with the remote repository.

The basic push command;

git push {remote-path} {branch-name}

Again the {remote-path} option is that of the repository in TFS and {branch-name} is the name of the branch to push.  This operation will push your commits to the remote repository.

In the event of merge conflicts, Git will tell you and give you the opportunity to resolve conflicts when appropriate.

To force the push, use the --force option.  To push all branches, use the --all option.
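The whole push workflow can be tried end-to-end with a local bare repository standing in for the TFS remote (all names here are invented);

```shell
#!/bin/sh
set -e

# A bare repository acting as the remote
remote=$(mktemp -d)
git init -q --bare "$remote"

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "hello" > ReadMe
git add ReadMe
git commit -q -m "First commit"

# Push the current branch to the remote
branch=$(git rev-parse --abbrev-ref HEAD)
git push -q "$remote" "$branch"

# The commit is now present on the remote
git ls-remote "$remote"
```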

 

Git Status

Shows the differences between the local copy and the staging area, listing files that have been added, modified or deleted.  Also shows which files are untracked (files that need adding using git add).

Usage;

git status

Not to be confused with git log, which effectively shows you the history of the branch.
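A minimal demonstration of the states git status reports (file names are invented);

```shell
#!/bin/sh
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "original" > Tracked.cs
git add Tracked.cs
git commit -q -m "Add Tracked.cs"

echo "changed" >> Tracked.cs    # a modified, tracked file
touch Untracked.cs              # a brand new, untracked file

git status --short
# " M Tracked.cs"   - modified but not yet staged
# "?? Untracked.cs" - untracked, needs adding with git add
```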

Summary

Visual Studio only scratches the surface when it comes to the functionality available in Git.  Whilst almost all of the above commands can be executed using Visual Studio, learning them at the command prompt is the first step on the way to moving away from the UI.  Once you start to feel more comfortable, you can start reading the documentation and experimenting with new commands and options (and believe me, there are thousands) that are not available via the UI.

Quick tip: Avoid ‘async void’

When developing a Web API application recently with an AngularJS front end, I made a basic mistake and then lost 2 hours of my life trying to figure out what was causing the problem … async void.

It’s pretty common nowadays to use tasks to improve performance/scalability when writing a Web API controller.  Take the following code:

public async Task<Entry[]> Get()
{
    using (var context = new EntriesContext())
    {
        return await context.Entries.ToArrayAsync();
    }
}

At a high level, when ToArrayAsync is executed the call will be moved off onto another thread and the execution of the method will only continue once the operation is complete (when the data is returned from the database in this case).  This is great because it frees up the thread for use by other requests, resulting in better performance/scalability (we could argue about how true this is all day long, so let’s not do that here!).

So what about when you still want to harness this functionality, but you don’t need to return anything to the client?  async void?  Not quite.

Take the following Delete method:

public async void Delete(int id)
{
    using (var context = new EntriesContext())
    {
        Entry entity = await context.Entries.FirstOrDefaultAsync(c => c.Id == id);
        if (entity != null)
        {
            context.Entry(entity).State = EntityState.Deleted;
            await context.SaveChangesAsync();
        }
    }
}

The client uses the Id property to do what it needs to do, so it doesn’t care what actually gets returned…as long as the operation (deleting the entity) completes successfully.

To help illustrate the problem, here is the client side code (written in AngularJS, but it really doesn’t matter what the client side framework is);

$scope.delete = function () {

    var entry = $scope.entries[0];

    $http.delete('/api/Entries/' + entry.Id).then(function () {
        $scope.entries.splice(0, 1);
    });
};

When the delete operation is completed successfully (i.e. a 2xx response code), the then call-back method is raised and the entry is removed from the entries collection.  Only this code never actually runs.  So why?

If you’re lucky, your web browser will give you an error message to let you know that something went wrong…


I have however seen this error get swallowed up completely.

To get the actual error message, you will need to use an HTTP proxy tool, such as Fiddler.  With this you can capture the response message returned by the server, which should look something like this (for the sake of clarity I’ve omitted all the HTML code which collectively makes up the yellow screen of death);

An asynchronous module or handler completed while an asynchronous operation was still pending.

Yep, you have a race condition.  The method returned before it finished executing.  Under the hood, the framework didn’t create a Task for the method because the method does not return a Task.  Therefore when calling FirstOrDefaultAsync, the method does not pause execution and the error is encountered.

To resolve the problem, simply change the return type of the method from void to Task.  Don’t worry, you don’t actually have to return anything, and the compiler knows not to generate a build error if there is no return statement.  An easy fix, when you know what the problem is!
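Applied to the Delete method shown earlier, the fix is a one-word change to the signature; the body stays exactly the same;

```csharp
public async Task Delete(int id)
{
    using (var context = new EntriesContext())
    {
        Entry entity = await context.Entries.FirstOrDefaultAsync(c => c.Id == id);
        if (entity != null)
        {
            context.Entry(entity).State = EntityState.Deleted;
            await context.SaveChangesAsync();
        }
    }
}
```

With the method now returning Task, the framework waits for the asynchronous work to finish before completing the request, and the race condition disappears.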

Summary

Web API fully supports Tasks, which are helpful for writing more scalable applications.  When writing methods that don’t need to return a value to the client, it may seem to make sense to return void.  However, under the hood .NET requires the method to return Task in order for it to properly support asynchronous functionality.

AutoMapper

5 AutoMapper tips and tricks

AutoMapper is a productivity tool designed to help you write less repetitive mapping code.  AutoMapper maps objects to objects, using both convention and configuration.  AutoMapper is flexible enough that it can be overridden so that it will work with even the oldest legacy systems.  This post demonstrates what I have found to be 5 of the most useful, lesser known features.

Tip: I wrote unit tests to demonstrate each of the basic concepts.  If you would like to learn more about unit testing, please check out my post C# Writing Unit Tests with NUnit And Moq.

Demo project code

This is the basic structure of the code I will use throughout the tutorial;

public class Doctor
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class HealthcareProfessional
{
    public string FullName { get; set; }
}

public class Person
{
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class KitchenCutlery
{
    public int Knifes { get; set; }
    public int Forks { get; set; }
}

public class Kitchen
{
    public int KnifesAndForks { get; set; }
}

public class MyContext : DbContext
{
    public DbSet<Doctor> Doctors { get; set; }
}

public class DbInitializer : DropCreateDatabaseAlways<MyContext>
{
    protected override void Seed(MyContext context)
    {
        context.Doctors.Add(new Doctor
        {
            FirstName = "Jon",
            LastName = "Preece",
            Title = "Mr"
        });
    }
}

I will refer back to this code in each example.

AutoMapper Projection

No doubt one of the best, and probably least used, features of AutoMapper is projection.  AutoMapper, when used with an Object Relational Mapper (ORM) such as Entity Framework, can project the source object to the destination type at the database level, selecting only the columns the destination requires.  This may result in more efficient database queries.

AutoMapper provides the Project extension method, which extends the IQueryable interface for this task.  This means that the source object does not have to be fully retrieved before mapping can take place.

Take the following unit test;

[Test]
public void Doctor_ProjectToPerson_PersonFirstNameIsNotNull()
{
    //Arrange
    Mapper.CreateMap<Doctor, Person>()
            .ForMember(dest => dest.LastName, opt => opt.Ignore());

    //Act
    Person result;
    using (MyContext context = new MyContext())
    {
        context.Database.Log += s => Debug.WriteLine(s);
        result = context.Doctors.Project().To<Person>().FirstOrDefault();
    }

    //Assert
    Assert.IsNotNull(result.FirstName);
}

The query that is created and executed against the database is as follows;

SELECT TOP (1) 
    [d].[Id] AS [Id], 
    [d].[FirstName] AS [FirstName]
    FROM [dbo].[Doctors] AS [d]

Notice that LastName is not returned from the database?  This is quite a simple example, but the potential performance gains are obvious when working with more complex objects.

Recommended Further Reading: Instant AutoMapper

Automapper is a simple library that will help eliminate complex code for mapping objects from one to another. It solves the deceptively complex problem of mapping objects and leaves you with clean and maintainable code.

Instant Automapper Starter is a practical guide that provides numerous step-by-step instructions detailing some of the many features Automapper provides to streamline your object-to-object mapping. Importantly it helps in eliminating complex code.

Configuration Validation

Hands down the most useful, time saving feature of AutoMapper is Configuration Validation.  Basically, after you set up your maps, you can call Mapper.AssertConfigurationIsValid() to ensure that the maps you have defined make sense.  This saves you the hassle of having to run your project, navigate to the appropriate page, click button A/B/C and so on to test that your mapping code actually works.

Take the following unit test;

[Test]
public void Doctor_MapsToHealthcareProfessional_ConfigurationIsValid()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>();

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

AutoMapper throws the following exception;

AutoMapper.AutoMapperConfigurationException : 
Unmapped members were found. Review the types and members below.
Add a custom mapping expression, ignore, add a custom resolver, or modify the source/destination type
===================================================================
Doctor -> HealthcareProfessional (Destination member list)
MakingLifeEasier.Doctor -> MakingLifeEasier.HealthcareProfessional (Destination member list)
-------------------------------------------------------------------
FullName

AutoMapper can’t infer a map between Doctor and HealthcareProfessional because they are structurally very different.  A custom converter, or ForMember needs to be used to indicate the relationship;

[Test]
public void Doctor_MapsToHealthcareProfessional_ConfigurationIsValid()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>()
          .ForMember(dest => dest.FullName, opt => opt.MapFrom(src => string.Join(" ", src.Title, src.FirstName, src.LastName)));

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

The test now passes because every public property now has a valid mapping.

Custom Conversion

Sometimes when the source and destination objects are too different to be mapped using convention, and simply too big to write elegant inline mapping code (ForMember) for each individual member, it can make sense to do the mapping yourself.  AutoMapper makes this easy by providing the ITypeConverter<TSource, TDestination> interface.

The following is an implementation for mapping Doctor to a HealthcareProfessional;

public class HealthcareProfessionalTypeConverter : ITypeConverter<Doctor, HealthcareProfessional>
{
    public HealthcareProfessional Convert(ResolutionContext context)
    {
        if (context == null || context.IsSourceValueNull)
            return null;

        Doctor source = (Doctor)context.SourceValue;

        return new HealthcareProfessional
        {
            FullName = string.Join(" ", new[] { source.Title, source.FirstName, source.LastName })
        };
    }
}

You instruct AutoMapper to use your converter by using the ConvertUsing method, passing the type of your converter, as shown below;

[Test]
public void Legacy_SourceMappedToDestination_DestinationNotNull()
{
    //Arrange
    Mapper.CreateMap<Doctor, HealthcareProfessional>()
            .ConvertUsing<HealthcareProfessionalTypeConverter>();

    Doctor source = new Doctor
    {
        Title = "Mr",
        FirstName = "Jon",
        LastName = "Preece",
    };

    Mapper.AssertConfigurationIsValid();

    //Act
    HealthcareProfessional result = Mapper.Map<HealthcareProfessional>(source);

    //Assert
    Assert.IsNotNull(result);
}

AutoMapper simply hands over the source object (Doctor) to you, and you return a new instance of the destination object (HealthcareProfessional), with the populated properties.  I like this approach because it means I can keep all my monkey mapping code in one single place.

Value Resolvers

Value resolvers allow for custom resolution of individual destination properties.  The source object KitchenCutlery contains a precise breakdown of the number of knifes and forks in the kitchen, whereas the destination object Kitchen only cares about the sum total of both.  AutoMapper won’t be able to create a convention based mapping here for us, so we use a Value Resolver;

public class KitchenResolver : ValueResolver<KitchenCutlery, int>
{
    protected override int ResolveCore(KitchenCutlery source)
    {
        return source.Knifes + source.Forks;
    }
}

The value resolver, similar to the type converter, takes care of the mapping and returns a result, but notice that it is specific to the individual property, and not the full object.

The following code snippet shows how to use a Value Resolver;

[Test]
public void Kitchen_KnifesKitchen_ConfigurationIsValid()
{
    //Arrange

    Mapper.CreateMap<KitchenCutlery, Kitchen>()
            .ForMember(dest => dest.KnifesAndForks, opt => opt.ResolveUsing<KitchenResolver>());

    //Act

    //Assert
    Mapper.AssertConfigurationIsValid();
}

Null Substitution

Think default values.  In the event that you want to give a destination object a default value when the source value is null, you can use AutoMapper’s NullSubstitute feature.

Example usage of the NullSubstitute method, applied individually to each property;

[Test]
public void Doctor_TitleIsNull_DefaultTitleIsUsed()
{
    //Arrange
    Doctor source = new Doctor
    {
        FirstName = "Jon",
        LastName = "Preece"
    };

    Mapper.CreateMap<Doctor, Person>()
            .ForMember(dest => dest.Title, opt => opt.NullSubstitute("Dr"));

    //Act
    Person result = Mapper.Map<Person>(source);

    //Assert
    Assert.AreEqual("Dr", result.Title);
}

Summary

AutoMapper is a productivity tool designed to help you write less repetitive mapping code.  You don’t have to rewrite your existing code or write code in a particular style to use AutoMapper, as it is flexible enough to be configured to work with even the oldest legacy code.  Most developers aren’t using AutoMapper to its full potential, rarely straying away from Mapper.Map.  There are a multitude of useful tidbits, including Projection, Configuration Validation, Custom Conversion, Value Resolvers and Null Substitution, which can help simplify complex logic when used correctly.

How to create your own ASP .NET MVC model binder

Model binding is the process of converting POST data, or data present in the Url, into .NET objects.  ASP .NET MVC makes this very simple by providing the DefaultModelBinder.  You’ve probably seen this in action many times (even if you didn’t realise it!), but did you know you can easily write your own?

A typical ASP .NET MVC Controller

You’ve probably written or seen code like this many hundreds of times;

public ActionResult Index(int id)
{
    using (ExceptionManagerEntities context = new ExceptionManagerEntities())
    {
        Error entity = context.Errors.FirstOrDefault(c => c.ID == id);

        if (entity != null)
        {
            return View(entity);
        }
    }

    return View();
}

Where did Id come from?  It probably came from one of three sources: the Url (Controller/View/{id}), the query string (Controller/View?id={id}), or the POST data.  Under the hood, ASP .NET examines your controller method and searches each of these places looking for data that matches the data type and the name of the parameter.  It may also look at your route configuration to aid this process.

A typical controller method

The code shown in the first snippet is very common in many ASP .NET MVC controllers.  Your action method accepts an Id parameter, your method then fetches an entity based on that Id, and then does something useful with it (and typically saves it back to the database or returns it back to the view).

You can create your own MVC model binder to cut out this step, and simply have the entity itself passed to your action method. 

Take the following code;

public ActionResult Index(Error error)
{
    if (error != null)
    {
        return View(error);
    }

    return View();
}

How much sweeter is that?

Create your own ASP .NET MVC model binder

You can create your own model binder in two simple steps;

  1. Create a class that inherits from DefaultModelBinder, and override the BindModel method (and build up your entity in there)
  2. Add a line of code to your Global.asax.cs file to tell MVC to use that model binder.

Before we forget, tell MVC about your model binder as follows (in the Application_Start method in your Global.asax.cs file);

ModelBinders.Binders.Add(typeof(Error), new ErrorModelBinder());

This tells MVC that if it stumbles across a parameter on an action method of type Error, it should attempt to bind it using the ErrorModelBinder class you just created.

Your BindModel implementation will look like this;

public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
{
    if (bindingContext.ModelType == typeof(Error))
    {
        ValueProviderResult valueProviderValue = bindingContext.ValueProvider.GetValue("id");

        int id;
        if (valueProviderValue != null && int.TryParse((string)valueProviderValue.RawValue, out id))
        {
            using (ExceptionManagerEntities context = new ExceptionManagerEntities())
            {
                return context.Errors.FirstOrDefault(c => c.ID == id);
            }
        }
    }

    return base.BindModel(controllerContext, bindingContext);
}

The code digested;

  1. Make sure that we are only trying to build an object of type Error (this should always be true, but let’s include this check as a safety net).
  2. Get the ValueProviderResult of the value provider we care about (in this case, the Id property).
  3. Check that it exists, and that it’s definitely an integer.
  4. Now fetch our entity and return it back.
  5. Finally, if any of our safety nets fail, just fall back to the default model binder and let that try and figure it out for us.

And the end result?  The Error entity is fetched automatically and arrives at your action method ready to use.

Your new model binder can now be used on any action method throughout your ASP .NET MVC application.

Summary

You can significantly reduce code duplication and simplify your controller classes by creating your own model binder.  Simply create a new class that derives from DefaultModelBinder and add your logic to fetch your entity.  Be sure to add a line to your Global.asax.cs file so that MVC knows what to do with it, or you may get some confusing error messages.

Moq and NUnit – Abstract and interface types

Effectively unit testing code using Moq and NUnit is a breeze and a pleasure.  If you’re not currently unit testing your code, and you’re interested in getting started, please take a look at my C# Writing unit tests with NUnit and Moq tutorial.

Mocking interfaces and abstract classes using Moq is no more complicated than mocking any other type.  There are just a couple of things to look out for.

Mocking Interfaces

Assume the following interface;

public interface IVehicle
{
    int BHP { get; set; }
    bool HasWheels { get; }
    int Wheels { get; }

    bool Move();
}

And the following unit test;

[Test]
public void IVehicle_Move()
{
    Mock<IVehicle> vehicle = new Mock<IVehicle>();

    int wheels = vehicle.Object.Wheels;

    Assert.IsTrue(wheels == 0);
}

As far as I know, there is no way to specify a concrete implementation to use when mocking an interface.  By default, Moq will return the default value for each property on the interface, and does nothing when void methods are executed.  If you want to override this behaviour, you must tell Moq what to do when the property/method is accessed;

[Test]
public void IVehicle_Move()
{
    Mock<IVehicle> vehicle = new Mock<IVehicle>();

    vehicle.Setup(t => t.Wheels).Returns(4);
    vehicle.Setup(t => t.Move()).Callback(() => Console.WriteLine("Move was called"));

    int wheels = vehicle.Object.Wheels;

    Assert.IsTrue(wheels == 0);
    vehicle.Verify(t => t.Move(), Times.Exactly(1));
}

The above test obviously fails miserably, but this is just a contrived example to make the point.  You use the Setup method on your mock object with the Callback method to override the default behaviour.

Abstract Classes

Abstract classes are subtly different.  Take the following abstract class;

public abstract class Vehicle : IVehicle
{
    public int BHP { get; set; }

    public bool HasWheels
    {
        get
        {
            return Wheels > 0;
        }
    }

    public abstract int Wheels { get; }

    public string WhoYouGonnaCall
    {
        get
        {
            return "Ghostbusters";
        }
    }

    public abstract bool Move();
}

The class itself is marked as abstract, meaning it cannot be directly instantiated.  The class contains a mix of abstract methods/properties and non-abstract properties.

Assuming the following unit test;

[Test]
public void Vehicle_Move()
{
    Mock<Vehicle> vehicle = new Mock<Vehicle>();

    int wheels = vehicle.Object.Wheels;

    Assert.IsTrue(wheels == 0);
}

As Wheels is abstract it has no direct implementation, therefore Moq will return the default value of the property’s data type (Int32, default value of 0).  However, the property WhoYouGonnaCall is not abstract, meaning it can be intercepted.  Take the following test;

[Test]
public void Vehicle_WhoYouGonnaCall()
{
    Mock<Vehicle> vehicle = new Mock<Vehicle>();

    string gonnaCall = vehicle.Object.WhoYouGonnaCall;

    Assert.AreEqual(gonnaCall, "Ghostbusters");
}

The property WhoYouGonnaCall is not mocked and its original value is returned rather than the default value of string.

Summary

Moq can easily be used to unit test abstract and interface types.  The process is the same as mocking any other type, just with subtle differences in behaviour to look out for.