Angular 2 server side paging using ng2-pagination

Angular 2 is not quite out of beta yet (Beta 12 at the time of writing), but I’m in the full flow of developing with it for production use. A common feature, for better or worse, is to have lists/tables of data that the user can navigate through page by page, or even filter, to help find something useful.

Angular 2 doesn’t come with any out-of-the-box functionality to support this, so we have to implement it ourselves. And of course, what that means today is using a third party package!

To make this happen, we will utilise ng2-pagination, a great plugin, together with Web API.

I’ve chosen Web API because that is what I’m using in my production app, but you could easily use ExpressJS or (insert your favourite RESTful framework here).

Checklist

Here is a checklist of what we will do to make this work;

  • Create a new Web API project (you could very easily use an existing project)
  • Enable CORS, as we will be using a separate development server for the Angular 2 project
  • Download the Angular 2 quick start, ng2-pagination and connect the dots
  • Expose some sample data for testing

I will try to stick with this order.

Web API (for the back end)

Open up Visual Studio (free version here) and create a new Web API project. I prefer to create an Empty project and add Web API.

Add a new controller called DataController, and add the following code;

public class DataModel
{
    public int Id { get; set; }
    public string Text { get; set; }
}

[RoutePrefix("api/data")]
public class DataController : ApiController
{
    private readonly List<DataModel> _data;

    public DataController()
    {
        _data = new List<DataModel>();

        for (var i = 0; i < 10000; i++)
        {
            _data.Add(new DataModel {Id = i + 1, Text = "Data Item " + (i + 1)});
        }
    }

    [HttpGet]
    [Route("{pageIndex:int}/{pageSize:int}")]
    public PagedResponse<DataModel> Get(int pageIndex, int pageSize)
    {
        return new PagedResponse<DataModel>(_data, pageIndex, pageSize);
    }
}

We don’t need to connect to a database to make this work, so we just dummy up 10,000 “items” and page through those instead. If you choose to use Entity Framework, the code is almost exactly the same, except you initialise a DbContext and query a Set instead.
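For illustration, here is a rough sketch of that variant. DataContext and its Items set are hypothetical names, and note that Skip/Take against a database require a deterministic ordering;

public class DataContext : DbContext
{
    public DbSet<DataModel> Items { get; set; }
}

[HttpGet]
[Route("{pageIndex:int}/{pageSize:int}")]
public PagedResponse<DataModel> Get(int pageIndex, int pageSize)
{
    using (var context = new DataContext())
    {
        // OrderBy gives Skip/Take a stable, deterministic ordering
        return new PagedResponse<DataModel>(context.Items.OrderBy(x => x.Id), pageIndex, pageSize);
    }
}

For large tables you would also want PagedResponse to accept an IQueryable<T>, so that the Skip/Take is translated to SQL and executed in the database rather than in memory.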

PagedResponse

Add the following code;

public class PagedResponse<T>
{
    public PagedResponse(IEnumerable<T> data, int pageIndex, int pageSize)
    {
        Data = data.Skip((pageIndex - 1)*pageSize).Take(pageSize).ToList();
        Total = data.Count();
    }

    public int Total { get; set; }
    public ICollection<T> Data { get; set; }
}

PagedResponse exposes two properties: Total and Data. Total is the total number of records in the set, and Data is the subset of data itself. We have to include the total number of items in the set so that ng2-pagination knows how many pages there are in total. It uses this to generate links/buttons that enable the user to skip forward several pages at once.
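For example, requesting page 1 with a page size of 2 would yield a response shaped like this (assuming the camel-case JSON formatting configured in the section below);

{
  "total": 10000,
  "data": [
    { "id": 1, "text": "Data Item 1" },
    { "id": 2, "text": "Data Item 2" }
  ]
}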

Enable CORS (Cross Origin Resource Sharing)

To enable communication between our client and server, we need to enable Cross Origin Resource Sharing (CORS) as they will be (at least during development) running under different servers.

To enable CORS, first install the following package (using NuGet);

Microsoft.AspNet.WebApi.Cors

Now open up WebApiConfig.cs and add the following to the Register method;

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
config.MessageHandlers.Add(new PreflightRequestsHandler());

And add a new nested class, as shown;

public class PreflightRequestsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Contains("Origin") && request.Method.Method == "OPTIONS")
        {
            var response = new HttpResponseMessage {StatusCode = HttpStatusCode.OK};
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            response.Headers.Add("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization");
            response.Headers.Add("Access-Control-Allow-Methods", "*");
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

Now when Angular makes a request for data, it will send an OPTIONS request first to check access. This request will be intercepted by the handler above, which replies with an Access-Control-Allow-Origin header whose value, an asterisk, permits any origin.

Format JSON response

If, like me, you hate Pascal Case JavaScript (ThisIsPascalCase), you will want to add the following code to your Application_Start method;

var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Formatting = Formatting.Indented;
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();

Now let’s set up the front end.

Front-end Angular 2 and ng2-pagination

If you head over to the Angular 2 quickstart, you will see there is a link to download the quick start source code. Go ahead and do that.

I’ll wait here.

OK, you’re done? Let’s continue.

Install ng2-pagination, and optionally bootstrap and jquery if you want this to look pretty. Skip those two if you don’t mind a plain look.

npm install --save-dev ng2-pagination bootstrap jquery

Open up index.html and add the following scripts to the header;

<script src="node_modules/angular2/bundles/http.dev.js"></script>
<script src="node_modules/ng2-pagination/dist/ng2-pagination-bundle.js"></script>

<script src="node_modules/jquery/dist/jquery.js"></script>
<script src="node_modules/bootstrap/dist/js/bootstrap.js"></script>

Also add a link to the bootstrap CSS file, if required.

<link rel="stylesheet" href="node_modules/bootstrap/dist/css/bootstrap.css">

Notice we pulled in Http? We will use that for querying our back-end.

Add a new file to the app folder, called app.component.html. We will use this instead of having all of our markup and TypeScript code in the same file.

ng2-pagination

Open app.component.ts, delete everything, and add the following code instead;

import {Component, OnInit} from 'angular2/core';
import {Http, HTTP_PROVIDERS} from 'angular2/http';
import {Observable} from 'rxjs/Rx';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/do';
import {PaginatePipe, PaginationService, PaginationControlsCmp, IPaginationInstance} from 'ng2-pagination';

export interface PagedResponse<T> {
    total: number;
    data: T[];
}

export interface DataModel {
    id: number;
    text: string;
}

@Component({
    selector: 'my-app',
    templateUrl: './app/app.component.html',
    providers: [HTTP_PROVIDERS, PaginationService],
    directives: [PaginationControlsCmp],
    pipes: [PaginatePipe]
})
export class AppComponent implements OnInit {
    private _data: Observable<DataModel[]>;
    private _page: number = 1;
    private _total: number;

    constructor(private _http: Http) {

    }
}

A quick walk-through of what I’ve changed;

  • Removed the inline HTML and linked to the app.component.html file you created earlier. (This leads to a cleaner separation of concerns).
  • Imported Observable, plus the map and do operators, from RxJS. This will enable us to write cleaner async code without having to rely on promises.
  • Imported a couple of classes from angular2/http so that we can use the native Http client, and added HTTP_PROVIDERS as a provider.
  • Imported various objects required by ng2-pagination, and added to providers, directives and pipes so we can access them through our view (which we will create later).
  • Defined two interfaces, PagedResponse<T> and DataModel. You may notice these are identical to the classes we created in our Web API project.
  • Added some variables, which we will discuss shortly.

We’ve got the basics in place that we need to call our data service and pass the data over to ng2-pagination. Now let’s actually implement that process.

Retrieving data using Angular 2 Http

Eagle-eyed readers may have noticed that I’ve pulled in and implemented the OnInit interface, but not yet implemented the ngOnInit method itself.

Add the following method;

ngOnInit() {
    this.getPage(1);
}

When the page loads and is initialised, we want to automatically grab the first page of data. The above method will make that happen.

Note: If you are unfamiliar with ngOnInit, please read this helpful documentation on lifecycle hooks.

Now add the following code;

getPage(page: number) {
    this._data = this._http.get("http://localhost:52472/api/data/" + page + "/10")
        .do((res: any) => {
            this._total = res.json().total;
            this._page = page;
        })
        .map((res: any) => res.json().data);
}

The above method does the following;

  • Calls out to our Web API (you may need to change the port number depending on your set up)
  • Passes in two values, the first being the current page number, the second being the number of results to retrieve
  • Stores the resulting observable in the _data variable. Once the request is complete, do is executed.
  • do is an operator that runs (here via an arrow function) for each item emitted by the observable. We’ve set up our Web API method to return a single object, of type PagedResponse, so it will only be executed once. We take this opportunity to update _page (which is the same page number passed into the method in the first place) and _total, which stores the total number of items in the entire set (not just the current page).
  • map is then used to parse the response as JSON and pull out the data property. RxJS then emits an event to notify subscribers that the collection has changed.

Implement the view

Open app.component.html and add the following code;

<div class="container">
    <table class="table table-striped table-hover">
        <thead>
            <tr>
                <th>Id</th>
                <th>Text</th>
            </tr>
        </thead>
        <tbody>
            <tr *ngFor="#item of _data | async | paginate: { id: 'server', itemsPerPage: 10, currentPage: _page, totalItems: _total }">
                <td>{{item.id}}</td>
                <td>{{item.text}}</td>
            </tr>
        </tbody>
    </table>    
    <pagination-controls (pageChange)="getPage($event)" id="server"></pagination-controls>
</div>

There are a few key points of interest here;

  • On our repeater (*ngFor), we’ve used the async pipe. Under the hood, Angular subscribes to the Observable we pass to it and resolves the value automatically (asynchronously) when it becomes available.
  • We use the paginate pipe, and pass in an object containing the current page and total number of pages so ng2-pagination can render itself properly.
  • We add the pagination-controls directive, which calls back to our getPage function when the user clicks a page number that they are not currently on.

As we know the current page and the number of items per page, we can pass these to the Web API and retrieve only the specific data we need.

So, why bother?

Some benefits;

  • Potentially reduced initial page load time, because less data has to be retrieved from the database, serialized, and transferred over the wire.
  • Reduced memory usage on the client. Otherwise, all 10,000 records would have to be held in memory!
  • Reduced processing time; as only the paged data is stored in memory, there are far fewer records to iterate through!

Drawbacks;

  • Lots of small requests for data could reduce server performance (due to chattiness). Using an effective caching strategy is key here.
  • User experience could be degraded. If the server is slow to respond, the client may appear to be slow and could frustrate the user.

Summary

Using ng2-pagination, and with help from RX.js, we can easily add pagination to our pages. Doing so has the potential to reduce server load and initial page render time, and thus can result in a better user experience. A good caching strategy and server response times are important considerations when going to production.

Create a RESTful API with authentication using Web API and JWT

Web API is a feature of the ASP.NET framework that dramatically simplifies building RESTful (REST like) HTTP services that are cross platform and device and browser agnostic. With Web API, you can create endpoints that can be accessed using a combination of descriptive URLs and HTTP verbs. Those endpoints can serve data back to the caller as either JSON or XML that is standards compliant. With JSON Web Tokens (JWT), which are typically stateless, you can add an authentication and authorization layer, enabling you to restrict access to some or all of your API.

The purpose of this tutorial is to develop the beginnings of a Book Store API, using Microsoft Web API with C#, which authenticates and authorizes each request, exposes OAuth2 endpoints, and returns data about books and reviews for consumption by the caller. The caller in this case will be Postman, a useful utility for querying APIs.

In a follow up to this post we will write a front end to interact with the API directly.

Set up

Open Visual Studio (I will be using Visual Studio 2015 Community edition, you can use whatever version you like) and create a new Empty project, ensuring you select the Web API option;

Where you save the project is up to you, but I will create my projects under C:\Source. For simplicity you might want to do the same.

New Project

Next, packages.

Packages

Open the Package Manager Console. Some packages should have already been added to enable Web API itself. Please run the following commands to install the additional packages;

install-package EntityFramework
install-package Microsoft.AspNet.Cors
install-package Microsoft.AspNet.Identity.Core
install-package Microsoft.AspNet.Identity.EntityFramework
install-package Microsoft.AspNet.Identity.Owin
install-package Microsoft.AspNet.WebApi.Cors
install-package Microsoft.AspNet.WebApi.Owin
install-package Microsoft.Owin.Cors
install-package Microsoft.Owin.Security.Jwt
install-package Microsoft.Owin.Host.SystemWeb
install-package System.IdentityModel.Tokens.Jwt
install-package Thinktecture.IdentityModel.Core

These are the minimum packages required to provide data persistence, enable CORS (Cross-Origin Resource Sharing), and enable generating and authenticating/authorizing JWTs.

Entity Framework

We will use Entity Framework for data persistence, using the Code-First approach. Entity Framework will take care of generating a database, adding tables, stored procedures and so on. As an added benefit, Entity Framework will also upgrade the schema automatically as we make changes. Entity Framework is perfect for rapid prototyping, which is what we are in essence doing here.

Create a new IdentityDbContext called BooksContext, which will give us Users, Roles and Claims in our database. I like to add this under a folder called Core, for organization. We will add our entities to this later.

namespace BooksAPI.Core
{
    using Microsoft.AspNet.Identity.EntityFramework;

    public class BooksContext : IdentityDbContext
    {

    }
}

Claims are used to describe useful information that the user has associated with them. We will use claims to tell the client which roles the user has. The benefit of roles is that we can prevent access to certain methods/controllers to a specific group of users, and permit access to others.

Add a DbMigrationsConfiguration class and allow automatic migrations, but prevent automatic data loss;

namespace BooksAPI.Core
{
    using System.Data.Entity.Migrations;

    public class Configuration : DbMigrationsConfiguration<BooksContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
            AutomaticMigrationDataLossAllowed = false;
        }
    }
}

Whilst losing data at this stage is not important (we will use a seed method later to populate our database), I like to turn this off now so I do not forget later.

Now tell Entity Framework how to update the database schema using an initializer, as follows;

namespace BooksAPI.Core
{
    using System.Data.Entity;

    public class Initializer : MigrateDatabaseToLatestVersion<BooksContext, Configuration>
    {
    }
}

This tells Entity Framework to go ahead and upgrade the database to the latest version automatically for us.

Finally, tell your application about the initializer by updating the Global.asax.cs file as follows;

namespace BooksAPI
{
    using System.Data.Entity;
    using System.Web;
    using System.Web.Http;
    using Core;

    public class WebApiApplication : HttpApplication
    {
        protected void Application_Start()
        {
            GlobalConfiguration.Configure(WebApiConfig.Register);
            Database.SetInitializer(new Initializer());
        }
    }
}

Data Provider

By default, Entity Framework will configure itself to use LocalDB. If this is not desirable, say you want to use SQL Express instead, you need to make the following adjustments;

Open the Web.config file and delete the following code;

<entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
        <parameters>
            <parameter value="mssqllocaldb" />
        </parameters>
    </defaultConnectionFactory>
    <providers>
        <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
    </providers>
</entityFramework>

And add the connection string;

<connectionStrings>
    <add name="BooksContext" providerName="System.Data.SqlClient" connectionString="Server=.;Database=Books;Trusted_Connection=True;" />
</connectionStrings>

Now we’re using SQL Server directly (whatever flavour that might be) rather than LocalDB.

JSON

Whilst we’re here, we might as well configure our application to return camel-case JSON (thisIsCamelCase), instead of the default pascal-case (ThisIsPascalCase).

Add the following code to your Application_Start method;

var formatters = GlobalConfiguration.Configuration.Formatters;
var jsonFormatter = formatters.JsonFormatter;
var settings = jsonFormatter.SerializerSettings;
settings.Formatting = Formatting.Indented;
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();

There is nothing worse than pascal-case JavaScript.

CORS (Cross-Origin Resource Sharing)

Cross-Origin Resource Sharing, or CORS for short, is when a client requests access to a resource (an image, or say, data from an endpoint) from an origin (domain) that is different from the domain where the resource itself originates.

This step is completely optional. We are adding in CORS support here because when we come to write our client app in subsequent posts that follow on from this one, we will likely use a separate HTTP server (for testing and debugging purposes). When released to production, these two apps would use the same host (Internet Information Services (IIS)).

To enable CORS, open WebApiConfig.cs and add the following code to the beginning of the Register method;

var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
config.MessageHandlers.Add(new PreflightRequestsHandler());

And add the following class (in the same file if you prefer for quick reference);

public class PreflightRequestsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Contains("Origin") && request.Method.Method == "OPTIONS")
        {
            var response = new HttpResponseMessage {StatusCode = HttpStatusCode.OK};
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            response.Headers.Add("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization");
            response.Headers.Add("Access-Control-Allow-Methods", "*");
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

In the CORS workflow, before sending a DELETE, PUT or POST request, the client sends an OPTIONS request to check whether the server will accept a request from its origin (domain). If the request origin and the server origin are not the same, then the server must include various access headers that describe which origins have access. To enable access to all origins, we just respond with an Access-Control-Allow-Origin header set to an asterisk.

The Access-Control-Allow-Headers header describes which headers the API can accept/is expecting to receive. The Access-Control-Allow-Methods header describes which HTTP verbs are supported/permitted.

See Mozilla Developer Network (MDN) for a more comprehensive write-up on Cross-Origin Resource Sharing (CORS).

Data Model

With Entity Framework configured, let’s create our data structure. The API will expose books, and books will have reviews.

Under the Models folder add a new class called Book. Add the following code;

namespace BooksAPI.Models
{
    using System.Collections.Generic;

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string ImageUrl { get; set; }

        public virtual List<Review> Reviews { get; set; }
    }
}

And add Review, as shown;

namespace BooksAPI.Models
{
    public class Review
    {
        public int Id { get; set; }    
        public string Description { get; set; }    
        public int Rating { get; set; }
        public int BookId { get; set; }
    }
}

Add these entities to the IdentityDbContext we created earlier;

public class BooksContext : IdentityDbContext
{
    public DbSet<Book> Books { get; set; }
    public DbSet<Review> Reviews { get; set; }
}

Be sure to add in the necessary using directives.
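For reference, the directives needed here are;

using System.Data.Entity;
using BooksAPI.Models;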

A couple of helpful abstractions

We need to create a couple of small abstractions over classes we will make use of, in order to keep our code clean and ensure that it works correctly.

Under the Core folder, add the following classes;

public class BookUserManager : UserManager<IdentityUser>
{
    public BookUserManager() : base(new BookUserStore())
    {
    }
}

We will make heavy use of the UserManager<T> in our project, and we don’t want to have to initialise it with a UserStore<T> every time we want to make use of it. Whilst adding this is not strictly necessary, it does go a long way to helping keep the code clean.

Now add another class for the UserStore, as shown;

public class BookUserStore : UserStore<IdentityUser>
{
    public BookUserStore() : base(new BooksContext())
    {
    }
}

This code is really important. If we fail to tell the UserStore which DbContext to use, it falls back to some default value.

A network-related or instance-specific error occurred while establishing a connection to SQL Server

I’m not sure what the default value is; all I know is that it doesn’t seem to correspond to our application’s DbContext. This code will help prevent you from tearing your hair out later, wondering why you are getting the super-helpful error message shown above.

API Controller

We need to expose some data to our client (when we write it). Let’s take advantage of Entity Framework’s Seed method. The Seed method will pre-populate some books and reviews automatically for us.

Instead of dropping the code in directly for this class (it is very long), please refer to the Configuration.cs file on GitHub.

This code gives us a little bit of starting data to play with, instead of having to add a bunch of data manually each time we make changes to our schema that require the database to be re-initialized (not really an issue in our case, as we have an extremely simple data model, but in larger applications this is very useful).
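To give a flavour of what that file contains, here is a minimal sketch of a Seed method (the values here are invented; the real Configuration.cs on GitHub seeds considerably more data). It belongs inside the Configuration class we created earlier;

protected override void Seed(BooksContext context)
{
    // AddOrUpdate keeps Seed idempotent; it runs after every migration
    context.Books.AddOrUpdate(b => b.Title, new Book
    {
        Title = "Sample Book",
        Description = "A made-up book for testing",
        Price = 9.99m,
        ImageUrl = "http://placehold.it/150x150",
        Reviews = new List<Review>
        {
            new Review { Description = "Great read!", Rating = 5 }
        }
    });
}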

Books Endpoint

Next, we want to create the RESTful endpoint that will retrieve all the books data. Create a new Web API controller called BooksController and add the following;

public class BooksController : ApiController
{
    [HttpGet]
    public async Task<IHttpActionResult> Get()
    {
        using (var context = new BooksContext())
        {
            return Ok(await context.Books.Include(x => x.Reviews).ToListAsync());
        }
    }
}

With this code we are fully exploiting recent changes to the .NET framework: the introduction of async and await. Writing asynchronous code in this manner allows the thread to be released whilst data (Books and Reviews) is being retrieved from the database and converted to objects to be consumed by our code. When the asynchronous operation is complete, the code picks up where it left off and continues executing. (By which we mean the hydrated data objects are passed to the underlying framework, converted to JSON/XML, and returned to the client).

Reviews Endpoint

We’re also going to enable authorized users to post reviews and delete reviews. For this we will need a ReviewsController with the relevant Post and Delete methods.

Create a new Web API controller called ReviewsController and add the following code;

public class ReviewsController : ApiController
{
    [HttpPost]
    public async Task<IHttpActionResult> Post([FromBody] ReviewViewModel review)
    {
        using (var context = new BooksContext())
        {
            var book = await context.Books.FirstOrDefaultAsync(b => b.Id == review.BookId);
            if (book == null)
            {
                return NotFound();
            }

            var newReview = context.Reviews.Add(new Review
            {
                BookId = book.Id,
                Description = review.Description,
                Rating = review.Rating
            });

            await context.SaveChangesAsync();
            return Ok(new ReviewViewModel(newReview));
        }
    }

    [HttpDelete]
    public async Task<IHttpActionResult> Delete(int id)
    {
        using (var context = new BooksContext())
        {
            var review = await context.Reviews.FirstOrDefaultAsync(r => r.Id == id);
            if (review == null)
            {
                return NotFound();
            }

            context.Reviews.Remove(review);
            await context.SaveChangesAsync();
        }
        return Ok();
    }
}

There are a couple of good practices in play here that we need to highlight.

The first method, Post, allows the user to add a new review. Notice the parameter for the method;

[FromBody] ReviewViewModel review

The [FromBody] attribute tells Web API to look for the data for the method argument in the body of the HTTP message that we received from the client, and not in the URL. The parameter type, ReviewViewModel, is a view model that wraps around the Review entity itself. Add a new folder to your project called ViewModels, add a new class called ReviewViewModel, and add the following code;

public class ReviewViewModel
{
    public ReviewViewModel()
    {
    }

    public ReviewViewModel(Review review)
    {
        if (review == null)
        {
            return;
        }

        BookId = review.BookId;
        Rating = review.Rating;
        Description = review.Description;
    }

    public int BookId { get; set; }
    public int Rating { get; set; }
    public string Description { get; set; }

    public Review ToReview()
    {
        return new Review
        {
            BookId = BookId,
            Description = Description,
            Rating = Rating
        };
    }
}

We are just copying all the properties from the Review entity to the ReviewViewModel and vice-versa. So why bother? The first reason is to help mitigate a well-known under/over-posting vulnerability (there's a good write-up about it here) inherent in most web services. It also helps prevent unwanted information being sent to the client. With this approach we have to explicitly expose data to the client by adding properties to the view model.

For this scenario, this approach is probably a bit overkill, but I highly recommend it; keeping your application secure is important, as is preventing leaks of potentially sensitive information. A tool I’ve used in the past to simplify this mapping code is AutoMapper. I highly recommend checking it out.
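As a rough sketch, assuming the classic AutoMapper static API that was current at the time of writing, the manual mapping above could be replaced along these lines;

// One-time configuration, for example in Application_Start
Mapper.CreateMap<Review, ReviewViewModel>();
Mapper.CreateMap<ReviewViewModel, Review>();

// Then, wherever a mapping is needed;
var viewModel = Mapper.Map<ReviewViewModel>(review);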

Important note: In order to keep our API RESTful, we return the newly created entity (or its view model representation) back to the client for consumption, removing the need to re-fetch the entire data set.

The Delete method is trivial. We accept the Id of the review we want to delete as a parameter, then fetch the entity and finally remove it from the collection. Calling SaveChangesAsync will make the change permanent.

Meaningful response codes

We want to return useful information back to the client as much as possible. Notice that the Post method returns NotFound(), which translates to a 404 HTTP status code, if the corresponding Book for the given review cannot be found. This is useful for client side error handling. Returning Ok() will return 200 (HTTP ‘Ok’ status code), which informs the client that the operation was successful.

Authentication and Authorization Using OAuth and JSON Web Tokens (JWT)

My preferred approach for dealing with authentication and authorization is to use JSON Web Tokens (JWT). We will open up an OAuth endpoint that accepts user credentials and returns a token which describes the user’s claims. For each of the user’s roles we will add a claim (which could be used to control which views the user has access to on the client side).

We use OWIN to add our OAuth configuration into the pipeline. Add a new class to the project called Startup.cs and add the following code;

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof (BooksAPI.Startup))]

namespace BooksAPI
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureOAuth(app);
        }
    }
}

Notice that Startup is a partial class. I’ve done that because I want to keep this class as simple as possible; as the application becomes more complicated and we add more and more middleware, this class will grow considerably. You could use a static helper class here, but the preferred method in the MSDN documentation seems to lean towards using partial classes specifically.

Under the App_Start folder add a new class called Startup.OAuth.cs and add the following code;

using System;
using System.Configuration;
using BooksAPI.Core;
using BooksAPI.Identity;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.DataHandler.Encoder;
using Microsoft.Owin.Security.Jwt;
using Microsoft.Owin.Security.OAuth;
using Owin;

namespace BooksAPI
{
    public partial class Startup
    {
        public void ConfigureOAuth(IAppBuilder app)
        {            
        }
    }
}

Note. When I wrote this code originally I encountered a quirk. After spending hours pulling out my hair trying to figure out why something was not working, I eventually discovered that the ordering of the code in this class is very important. If you don’t copy the code in the exact same order, you may encounter unexpected behaviour. Please add the code in the same order as described below.

OAuth secrets

First, add the following code;

var issuer = ConfigurationManager.AppSettings["issuer"];
var secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);

  • Issuer – a unique identifier for the entity that issued the token (not to be confused with Entity Framework’s entities)
  • Secret – a secret key used to secure the token and prevent tampering

I keep these values in the Web configuration file (Web.config). To be precise, I split these values out into their own configuration file called keys.config and add a reference to that file in the main Web.config. I do this so that I can exclude just the keys from source control by adding a line to my .gitignore file.

To do this, open Web.config and change the <appSettings> section as follows;

<appSettings file="keys.config">
</appSettings>

Now add a new file to your project called keys.config and add the following code;

<appSettings>
  <add key="issuer" value="http://localhost/"/>
  <add key="secret" value="IxrAjDoa2FqElO7IhrSrUJELhUckePEPVpaePlS_Xaw"/>
</appSettings>
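And the corresponding line in the .gitignore file;

keys.config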

Adding objects to the OWIN context

We can make use of OWIN to manage instances of objects for us, on a per request basis. The pattern is comparable to IoC, in that you tell the “container” how to create an instance of a specific type of object, then request the instance using a Get<T> method.

Add the following code;

app.CreatePerOwinContext(() => new BooksContext());
app.CreatePerOwinContext(() => new BookUserManager());

The first time we request an instance of BooksContext for example, the lambda expression will execute and a new BooksContext will be created and returned to us. Subsequent requests will return the same instance.

Important note: The life cycle of these object instances is per-request. As soon as the request is complete, the instances are cleaned up.
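To illustrate, here is how the OAuth provider we write later in this post retrieves both instances (context being the OAuth grant context, which exposes the OWIN context);

var booksContext = context.OwinContext.Get<BooksContext>();
var userManager = context.OwinContext.Get<BookUserManager>();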

Enabling Bearer Authentication/Authorization

To enable bearer authentication, add the following code;

app.UseJwtBearerAuthentication(new JwtBearerAuthenticationOptions
{
    AuthenticationMode = AuthenticationMode.Active,
    AllowedAudiences = new[] { "Any" },
    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
    {
        new SymmetricKeyIssuerSecurityTokenProvider(issuer, secret)
    }
});

The key takeaways of this code;

  • State who is the audience (we’re specifying “Any” for the audience, as this is a required field but we’re not fully implementing it).
  • State who is responsible for generating the tokens. Here we’re using SymmetricKeyIssuerSecurityTokenProvider and passing it our secret key to prevent tampering. We could use the X509CertificateSecurityTokenProvider, which uses an X509 certificate to secure the token (but I’ve found these to be overly complex in the past and I prefer a simpler implementation).

This code adds JWT bearer authentication to the OWIN pipeline.

Enabling OAuth

We need to expose an OAuth endpoint so that the client can request a token (by passing a user name and password).

Add the following code;

app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
{
    AllowInsecureHttp = true,
    TokenEndpointPath = new PathString("/oauth2/token"),
    AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
    Provider = new CustomOAuthProvider(),
    AccessTokenFormat = new CustomJwtFormat(issuer)
});

Some important notes with this code;

  • We’re going to allow insecure HTTP requests whilst we are in development mode. You might want to wrap this in a #if DEBUG directive so that you don’t allow insecure connections in production (see the sketch after this list).
  • Open an endpoint under /oauth2/token that accepts post requests.
  • When generating a token, make it expire after 30 minutes (1800 seconds).
  • We will use our own provider, CustomOAuthProvider, and formatter, CustomJwtFormat, to take care of authentication and building the actual token itself.
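As a sketch of that first point, the options block above could be wrapped in a conditional compilation directive like this;

app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
{
#if DEBUG
    // Development only; AllowInsecureHttp defaults to false, so release builds require HTTPS
    AllowInsecureHttp = true,
#endif
    TokenEndpointPath = new PathString("/oauth2/token"),
    AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
    Provider = new CustomOAuthProvider(),
    AccessTokenFormat = new CustomJwtFormat(issuer)
});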

We need to write the provider and formatter next.

Formatting the JWT

Create a new class under the Identity folder called CustomJwtFormat.cs. Add the following code;

namespace BooksAPI.Identity
{
    using System;
    using System.Configuration;
    using System.IdentityModel.Tokens;
    using Microsoft.Owin.Security;
    using Microsoft.Owin.Security.DataHandler.Encoder;
    using Thinktecture.IdentityModel.Tokens;

    public class CustomJwtFormat : ISecureDataFormat<AuthenticationTicket>
    {
        private static readonly byte[] _secret = TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["secret"]);
        private readonly string _issuer;

        public CustomJwtFormat(string issuer)
        {
            _issuer = issuer;
        }

        public string Protect(AuthenticationTicket data)
        {
            if (data == null)
            {
                throw new ArgumentNullException(nameof(data));
            }

            var signingKey = new HmacSigningCredentials(_secret);
            var issued = data.Properties.IssuedUtc;
            var expires = data.Properties.ExpiresUtc;

            return new JwtSecurityTokenHandler().WriteToken(new JwtSecurityToken(_issuer, null, data.Identity.Claims, issued.Value.UtcDateTime, expires.Value.UtcDateTime, signingKey));
        }

        public AuthenticationTicket Unprotect(string protectedText)
        {
            throw new NotImplementedException();
        }
    }
}

This is a complicated-looking class, but it’s pretty straightforward. We are just fetching all the information needed to generate the token (the claims, issued date, expiration date and signing key), then generating the token and returning it to the caller.

Please note: Some of the code we are writing today was influenced by JSON Web Token in ASP.NET Web API 2 using OWIN by Taiseer Joudeh. I highly recommend checking it out.

The authentication bit

We’re almost there, honest! Now we want to authenticate the user.

using System.Linq;
using System.Security.Claims;
using System.Security.Principal;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using BooksAPI.Core;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.AspNet.Identity.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.OAuth;

namespace BooksAPI.Identity
{
    public class CustomOAuthProvider : OAuthAuthorizationServerProvider
    {
        public override Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {
            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] {"*"});

            var user = context.OwinContext.Get<BooksContext>().Users.FirstOrDefault(u => u.UserName == context.UserName);

            // Guard against an unknown user name; CheckPassword cannot accept a null user
            if (user == null || !context.OwinContext.Get<BookUserManager>().CheckPassword(user, context.Password))
            {
                context.SetError("invalid_grant", "The user name or password is incorrect");
                context.Rejected();
                return Task.FromResult<object>(null);
            }

            var ticket = new AuthenticationTicket(SetClaimsIdentity(context, user), new AuthenticationProperties());
            context.Validated(ticket);

            return Task.FromResult<object>(null);
        }

        public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
            return Task.FromResult<object>(null);
        }

        private static ClaimsIdentity SetClaimsIdentity(OAuthGrantResourceOwnerCredentialsContext context, IdentityUser user)
        {
            var identity = new ClaimsIdentity("JWT");
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim("sub", context.UserName));

            var userRoles = context.OwinContext.Get<BookUserManager>().GetRoles(user.Id);
            foreach (var role in userRoles)
            {
                identity.AddClaim(new Claim(ClaimTypes.Role, role));
            }

            return identity;
        }
    }
}

As we’re not checking the audience, when ValidateClientAuthentication is called we can just validate the request. When the request has a grant_type of password, which all our requests to the OAuth endpoint will have, the above GrantResourceOwnerCredentials method is executed. This method authenticates the user and creates the claims to be added to the JWT.

Testing

There are two techniques you can use for testing this.

Technique 1 – Using the browser

Open up a web browser, and navigate to the books URL (for example, http://localhost:62996/api/books).

Testing with the web browser

You will see the list of books, displayed as XML. This is because Web API can serve up data either as XML or as JSON. Personally, I do not like XML; JSON is my choice these days.

Technique 2 (Preferred) – Using Postman

To make Web API respond with JSON we need to send along an Accept header. The best tool to enable us to do this (for Google Chrome) is Postman. Download it and give it a go if you like.

Drop the same URL into the Enter request URL field, and click Send. Notice the response is in JSON;

Postman response in JSON

This worked because Postman automatically adds the Accept header to each request. You can see this by clicking on the Headers tab. If the header isn’t there and you’re still getting XML back, just add the header as shown in the screenshot and re-send the request.

To test the delete method, change the HTTP verb to Delete and add the ReviewId to the end of the URL. For example; http://localhost:62996/api/reviews/9

Putting it all together

First, we need to restrict access to our endpoints.

Add a new file to the App_Start folder, called FilterConfig.cs and add the following code;

public class FilterConfig
{
    public static void Configure(HttpConfiguration config)
    {
        config.Filters.Add(new AuthorizeAttribute());
    }
}

And call the code from Global.asax.cs as follows;

GlobalConfiguration.Configure(FilterConfig.Configure);

Adding this code will restrict access to all endpoints (except the OAuth endpoint) to requests that have been authenticated (requests that send along a valid JWT).

You have much more fine-grained control here, if required. Instead of adding the above code, you could instead add the AuthorizeAttribute to specific controllers or even specific methods. The added benefit here is that you can also restrict access to specific users or specific roles;

Example code;

[Authorize(Roles = "Admin")]

The roles value (“Admin”) can be a comma-separated list, as shown in the example below. For us, restricting access to all endpoints will suffice.
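For example, to permit members of either of two roles (PowerUser being a hypothetical second role);

[Authorize(Roles = "Admin,PowerUser")]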

To test that this code is working correctly, simply make a GET request to the books endpoint;

GET http://localhost:62996/api/books

You should get the following response;

{
  "message": "Authorization has been denied for this request."
}

Great, it’s working. Now let’s get a token so we can make authenticated requests.

Make a POST request to the OAuth endpoint, and include the following;

  • Headers
    • Accept application/json
    • Accept-Language en-gb
    • Audience Any
  • Body
    • username administrator
    • password administrator123
    • grant_type password

Shown in the below screenshot;

OAuth Request

Make sure you set the message type as x-www-form-urlencoded.

If you are interested, here is the raw message;

POST /oauth2/token HTTP/1.1
Host: localhost:62996
Accept: application/json
Accept-Language: en-gb
Audience: Any
Content-Type: application/x-www-form-urlencoded
Cache-Control: no-cache
Postman-Token: 8bc258b2-a08a-32ea-3cb2-2e7da46ddc09

username=administrator&password=administrator123&grant_type=password

The form data has been URL encoded and placed in the message body.

The web service should authenticate the request, and return a token (Shown in the response section in Postman). You can test that the authentication is working correctly by supplying an invalid username/password. In this case, you should get the following reply;

{
  "error": "invalid_grant"
}

This is deliberately vague to avoid giving any malicious users more information than they need.

Now to get a list of books, we need to call the endpoint passing in the token as a header.

Change the HTTP verb to GET and change the URL to; http://localhost:62996/api/books.

On the Headers tab in Postman, add the following additional headers;

Authorization Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6ImFkbWluaXN0cmF0b3IiLCJzdWIiOiJhZG1pbmlzdHJhdG9yIiwicm9sZSI6IkFkbWluaXN0cmF0b3IiLCJpc3MiOiJodHRwOi8vand0YXV0aHpzcnYuYXp1cmV3ZWJzaXRlcy5uZXQiLCJhdWQiOiJBbnkiLCJleHAiOjE0NTgwNDI4MjgsIm5iZiI6MTQ1ODA0MTAyOH0.uhrqQW6Ik_us1lvDXWJNKtsyxYlwKkUrCGXs-eQRWZQ

See screenshot below;

Authorization Header

Success! We have data from our secure endpoint.

Summary

In this introduction we looked at creating a project using Web API to issue and authenticate JWTs (JSON Web Tokens). We created a simple endpoint to retrieve a list of books, and also added the ability to post and delete reviews in a RESTful way.

This project is the foundation for subsequent posts that will explore creating a rich client side application, using modern JavaScript frameworks, which will enable authentication and authorization.

How to debug websites on your mobile device using Google Chrome

I can’t believe I have survived this long as a web developer without knowing you can debug websites (JavaScript, CSS, HTML, TypeScript etc.) directly on your mobile device using Google Chrome developer tools. If you are currently using emulators/simulators or testing solutions such as Browser Stack, you will love this easy and free solution.

Be warned, however, you will be expected to download 6+ gigabytes of stuff before the magic begins.

I’ve only tested this on my Samsung Galaxy S6 Edge (running Android 5.1.1) but I believe it also works on an iPhone.

Prerequisite Software

Before connecting your phone to your computer, please ensure you have all of the following software installed;

Set up your device

Setting up your device is pretty simple. Start by connecting it to your computer with a USB cable and activate “Developer Mode” via the settings menu. Rather than explain all the individual steps, just follow this helpful guide.

Time to start debugging

If you haven’t already done so, go ahead and connect your device to your PC via USB cable.

Launch Google Chrome on your device, and launch Google Chrome on your computer. Navigate to chrome://inspect and your device should be listed.

Google Chrome Inspect Devices

If your device is not listed, you probably need to restart the ADB server. Run the commands as shown below from a standard or administrator command prompt;

Restart ADB Server

If you still cannot see your device listed, please check out the troubleshooting guide.

When ready, click inspect just below the title of the tab with your open web page – or use the convenient Open tab with url field to quickly open a new tab.

Google Chrome will now open a full-screen developer tools window, with a preview of the web page on the left, alongside a console window and other helpful tabs (including everything you are used to when debugging web pages in the desktop browser).

BBC in Chrome Remote Debugger

You can set breakpoints, use the debugger keyword, and debug in the same way you’re used to.

BBC Headline Changed

Any changes made on the PC are automatically and instantly reflected on the device, and vice versa!

Summary

Google Chrome has an incredibly useful feature that allows for remote debugging on your Android or iOS device using Google Chrome developer tools. The setup process involves downloading over 6GB of additional stuff, but it feels like a small price to pay for such a useful feature.

How to avoid burnout

You work hard 7 days a week, and you do your best to stay up to date with the latest industry trends. Inevitably you become demoralized and demotivated, and eventually suffer a partial or full-on collapse where all your progress comes to a grinding halt. After a period of time (days, weeks or months!) you get back on track and pick up where you left off, until the inevitable burnout cycle lands you back where you started. I’ve been through this cycle several times, and I’ve even blogged about it before, but now I have learnt some techniques to break the endless cycle and find a more maintainable work-life balance.

Here are my 5 ultimate tips to avoid burnout.

Stop

Start by reducing your workload.

Stop

You are probably doing some or all of the following on a regular basis;

  1. Watching training videos, doing some form of professional online training
  2. Freelance or other paid work for friends, family, or professionally
  3. Contributing to open source, or some form of unpaid work where you have responsibilities and deadlines
  4. Your day job

You probably can’t stop doing your day job, so you will want to give that the highest precedence. However, I can’t tell you how many people I’ve met in my life who “forget” to take paid leave (holiday days) on a regular basis. I’ve known people who still have 15 or more holiday days available in early December, and who either lose those days or just take the money instead. You should ensure that you regularly take some time off from work, at least once a quarter, and actually spend that time relaxing by yourself or with close family and friends.

If you’re doing online training on a regular basis, you shouldn’t stress about it. Don’t try and watch 12 hours of Pluralsight videos at 3x speed every night… stretch it out over a week or longer. You will absorb the information better and ultimately get more from the training.

Freelance or other paid work on top of your day job is a recipe for disaster. The stress of meeting additional deadlines, not being able to have face-to-face discussions with your client, and generally working 15 hours a day will rapidly accelerate burnout. Try not to take on freelance work if possible, or cap it at one project at any one time. The same goes for open source or otherwise unpaid work. Whilst typically not as stressful, the pressure of expectation can still sit on your shoulders, so try and keep it to a minimum.


Get a hobby

But software development is your hobby, right? For me that was certainly the case. I started programming as a hobbyist and eventually became a professional. Whilst I still consider software development to be a hobby (I enjoy it a lot), I’ve since broadened my interests and now consider myself to have several hobbies.

Some ideas for new hobbies;

  1. Some form of physical exercise. It might be working out (see my post on how I got fit), walking, hiking, skiing, cycling, or anything you like! Exercise is excellent for stress relief and refocusing the mind. Exercising also leads to a healthier lifestyle and better sleep/eating patterns, giving you more energy, which will contribute significantly to reducing burnout.
  2. Learn a new skill. I am in the process of teaching myself several new skills: DIY, plumbing, developing an understanding of the sciences (including quantum theory, advanced mathematics, astronomy/planetary science), and more. But here is my killer advice: learn life skills. If you learn how to, for example, put up a shelf, that is a life skill; screws, nails and hammers are pretty constant things, so in 10 years you will still know how to put up a shelf. That’s the common problem with our industry: the technology evolves so rapidly that 90% of what you learnt 5 years ago is irrelevant.

Whatever you decide to do, try and have at least one other hobby, ideally one that other people can get involved with too.


Read

I didn’t start reading books on a regular basis until I was 25 years old. The first book I read by choice, and not because somebody was forcing me to, was The Hobbit. I loved the book and I was instantly hooked. If you want a good science fiction read, I highly recommend checking out The Martian; it’s awesome!

I don’t limit myself to just fiction books, though; I read a wide variety of books on subjects like stock market investment, soft skills, autobiographies, and more.

Read

So why read? It’s simple: reading refocuses your mind on something different. Let’s say you’ve been writing code all morning, and you’re stuck on a problem that you can’t fix. If at lunchtime, for example, you step away from your computer and read a book for 30-45 minutes, when you get back to your desk you will be mentally refreshed. In the meantime, the problem you were having earlier has been percolating away at the back of your mind, and I can’t tell you how many times I’ve come back and fixed a difficult problem within just a few minutes.

Taking the time to step back, letting your mind power down, and focusing on something else is a very useful technique for relaxing, de-stressing, and ultimately helping to prevent burnout.

Try and read every day… you never know, you might even enjoy it.


Spend more time with immediate family, and friends

This is the ultimate technique for preventing burnout: spending time with close friends and family. Humans are very sociable beings, and benefit a lot from interacting with others.

Being sociable with others can trigger your body to release one of four feel-good chemicals: endorphins, oxytocin, serotonin and dopamine. This will result in a happiness boost, which will help reduce stress, and trigger a chain reaction where you are rewarded more the more you interact with others. Having strong relationships with work colleagues can also have other unintended consequences, including faster career progression and priority when decision-makers are appointing people to interesting projects.

Back to family. If you’re working all the time, you’re by definition spending less quality time with your significant other (wife, girlfriend, husband, etc.). Spending more time with them will result in a better quality of life, more happiness, and a reduced risk of burnout.


Record your progress

If you absolutely must ignore all the prior advice, then please take away the advice given in this last point.  Record your progress.

Time

The most effective way I have found to stay motivated and ward off burnout is to effectively track your time and progress. Take your freelance project, or whatever you are working on, and break it down into a list of tasks. Then, as you work your way through each task, record how long it took to complete and physically tick it, cross it out, or in some way indicate that the task is finished. Then at the end of each day or week, take the time to review the list and see how much progress you have made during that period. Doing this methodically will help you remember that you are moving forward all the time and getting closer to your goals.

Tracking your forward progress and getting closer to your end goal is the ultimate technique for avoiding burnout.


Summary

Following this advice will help restore your work-life balance by making your work time much more focused, giving your brain time to slow down to better absorb new information, and generally will make you happier in daily life thanks to the better relationships you will develop with others who are important to you.  If you absolutely can’t follow the first 4 tips, make sure you at least record your progress so you can see yourself moving forward towards a goal over a period of time.

TypeScript beginners guide

TypeScript is a tool that enables you to write better JavaScript. You may have heard that TypeScript is a superset of JavaScript, but what does that mean? TypeScript is JavaScript; if you know JavaScript already, then you already know a lot of TypeScript. You can convert an existing JavaScript file to TypeScript simply by changing the file extension. TypeScript has a very low barrier to entry (you can easily write it using Notepad) and a small learning curve.

TypeScript has a transpiler, called tsc, which transforms (compiles, if you like) your code from TypeScript to JavaScript. If you use any TypeScript paradigms, then your code cannot be understood directly by a JavaScript execution engine (V8, for example). You can, however, enable source maps and debug your TypeScript code directly.
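For example, assuming a file called app.ts, compilation from the command line looks like this;

tsc app.ts
tsc --sourceMap app.ts

The first command emits app.js; the second also emits app.js.map, the source map used for debugging.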

Developers: “Why should I bother using TypeScript?”

When I talk to developers about TypeScript, the first thing they ask me is “Why should I bother using TypeScript?” or “I already understand JavaScript and I’ve been using it for x years, so I don’t feel the need to use it…”.

This is a great platform to explain to people why they should use TypeScript. Not because I’m on some personal mission or because I get paid by Microsoft (I don’t, but that would be awesome by the way) … it is because TypeScript can genuinely help developers write better code.

TypeScript enables developers to write more robust code

TypeScript provides several out of the box features that enable you to write more robust JavaScript;

1. Static typing

Properties, fields, function parameters and more can be decorated (sprinkled) with type declarations, which act as hints to the compiler and ultimately result in compile time type checking.

You can start very simply, by, say, adding a string type to a function parameter.

function print(message:string) {
    //Console log message here
}

This will ensure that any calling method passes a string value as the parameter. This means that, should you attempt to pass, for example, a number, you will get a compile time error.
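
For example (the exact error text is approximate and varies between compiler versions);

print("Hello, World!"); // OK, the argument is a string

print(123);             // Compile time error, along the lines of;
                        // "Argument of type 'number' is not assignable to parameter of type 'string'."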

If you think type checking can be a hindrance to the dynamic nature of JavaScript, read on.

2. TypeScript is optional, and it takes a back seat

Unlike most other programming paradigms, TypeScript is completely optional. If there is a feature you don’t like, you don’t have to use it. In fact, you can write 100% pure vanilla JavaScript inside a .ts file and never include any TypeScript paradigms and everything will work just fine. If you do encounter compile time errors, TypeScript will still emit your compiled JavaScript… you are not forced to fix the compilation error, unlike other compiled languages like C++ or C# for example.

In TypeScript 1.5+ there is a compiler flag (--noEmitOnError) that stops output being emitted in the event of a compilation error, should you choose to utilize this feature.

3. TypeScript is free, open source

Not only is TypeScript completely free and open source (even for commercial development), but there is also tooling for all the main operating systems (Linux, Mac, Windows), and it is not just limited to the Microsoft stack. You can get TypeScript via NPM, NuGet, or you can download it from GitHub.

4. TypeScript enables developers to write modern JavaScript

Good developers want to use the latest iteration of their tools. They use these tools every day, so keeping up to date makes sense.

The single biggest frustration for web developers who write JavaScript is cross browser support (is your company still supporting IE8?). TypeScript enables developers to write code against emerging standards whilst maintaining backwards compatibility. TypeScript is technically a transpiler and not a compiler, and it has a bunch of useful transformations to make this possible.

It is fair to say that the ECMAScript standard (the standard from which JavaScript ultimately derives) hasn’t evolved much over the last decade. There have been incremental updates, yes, but there was a long gap between ES5 and ES6 (about six years, to be precise). That’s all changed now, as the TC39 committee have committed to releasing revised standards on a yearly basis. In fact, officially, the ES6 standard has been renamed to ES2015, ES7 has been renamed to ES2016, and there will be yearly releases going forward. TypeScript enables developers to utilise these new standards because it provides transformations for many of them.

Example;
TypeScript 1.5 transforms the following ES6 string interpolation code;

var name = "Jon Preece";
var a = `Hello, ${name}`;

to ES5 friendly string concatenation;

var name = "Jon Preece";
var a = "Hello, " + name;

Yes, you can use most ES6 features knowing with 100% confidence that the emitted code is widely supported by all decent browsers (IE 7+ at least).

In the interest of fairness, this isn’t true for all ES6 features. Promises, for example, must be supported natively by the browser (or polyfilled). Still, there are many transformations available… resulting in a lot of developer feel-good. Check out my post Using ES6 features with TypeScript for more transformations.

Ultimately, however, I always recommend to developers that they use the right tools for the job, and that they use the tools that they themselves are most comfortable using. I recommend the same for you. Take the time to evaluate TypeScript, experiment with it, and see if it can be introduced into your daily development workflow. If you are developing a greenfield project, why not introduce it from the beginning? After all, TypeScript really comes into its own when used in a medium to large team environment.

The Basics

Converting a JavaScript file to TypeScript

As briefly mentioned, you can convert a JavaScript file to TypeScript by changing the file extension from .js to .ts. You will have to call upon the TypeScript compiler (known herein as tsc) to emit the resulting JavaScript file for you.
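
For example, assuming the file is named app.ts and tsc is available on your path, the simplest possible invocation is;

tsc app.ts

This emits an app.js file alongside the original source file.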

There are several approaches to using tsc depending on your operating system, IDE, and preferences. My operating system of choice, for example, is Windows 8.1 with VS Code. You, however, might use Sublime Text on a Mac or Vim on Ubuntu (these are just examples).

Add type declarations to function parameters

The simplest feature of TypeScript to use out of the box, and arguably the best feature, is type declarations, or static typing. You declare the type of a function parameter using the following syntax;

function print(message:string) {
    //Console log message here
}

This is the same code as shown earlier. We want to log a message to the console window, or display a message to the user, or whatever the case is. It’s reasonable to assume that the message will be a sequence of alphanumeric characters… a string.

It might not make any sense to do the following;

//Print '123' to the screen
print(123);

//Print this object to the screen
print({message: "abc" });

The result of calling the function in this manner is unpredictable at best, and at worst could result in an error in your application. By applying the type declaration to the parameter, we can get a warning at compile time that there is a problem.

It is worth mentioning that type declarations are specific to TypeScript, nothing related to the type declaration will be emitted into the final JavaScript. They are a compile time hint.

Type declarations everywhere

Type declarations are not just limited to function parameters. You can include them on properties, fields, and the return value for a function too!
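
A minimal sketch showing each (the Greeter class is made up for illustration);

class Greeter {
    greeting: string; // a typed property

    greet(name: string): string { // a typed parameter and a typed return value
        return "Hello, " + name;
    }
}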

There are other places, like Type Declaration files, but that is out of the scope of this post.

The ‘any’ type

Sometimes, a type isn’t known until runtime. In situations where the type isn’t known, you could use the any type;

function print(message: any) {
    //Console log message here
}

This tells tsc that the type is “unknown” and that static analysis is not required or appropriate.

Classes and Modules

By default, TypeScript does not use any sort of Asynchronous Module Definition (AMD) pattern. You may be familiar with RequireJS et al, but the default pattern is the IIFE pattern (you can change this if necessary).

Modules help with code organisation and reduce global scope pollution. Take the following code;

module Printing {
    class Printer {
        constructor(private startingValue: number) {

        }
        print(message: string): string {
            //Do something

            return "";
        }
    }
}

TypeScript will generate a root object, named Printing. This object will be added to the global scope. This is the module, and you can have as many modules in your application as you like.

Anything nested inside a module is scoped to that module, and anything you export from it (more on export later) is added to the module object as a property. So in this case, the Printer class belongs to the Printing module. This is great because now only one object has been added to the global scope, instead of two (reducing conflicts between your code and other external dependencies).

Constructors

Constructors are a feature of ES6, called when an object is instantiated. You can include your set up logic here for the specific instance. You can also pass values to the constructor and get full IntelliSense support;

module Printing {
    class Printer {
        private startingValue: number;

        constructor(startingValue : number) {
            this.startingValue = startingValue;
        }
    }
}

Understanding constructor parameters

Constructor parameters are slightly different compared to other programming languages. In the above example, we have a private field named startingValue, and we set its value to whatever the value of the startingValue constructor parameter is;

this.startingValue = startingValue;

This is unnecessary in TypeScript… TypeScript provides some syntactic sugar to sweeten this up.

The following code is valid TypeScript;

module Printing {
    class Printer {
        constructor(private startingValue : number) {

        }
    }
}

This is valid because, when an access modifier is applied to a constructor parameter, TypeScript creates a property on the class with the name startingValue and assigns the parameter's value to it automatically under the hood. Marking the parameter public exposes it externally; marking it private makes it accessible only within the class itself and not externally.
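
To illustrate, the JavaScript emitted for the Printer class above looks roughly like this;

var Printing;
(function (Printing) {
    var Printer = (function () {
        function Printer(startingValue) {
            // TypeScript generates this assignment for you
            this.startingValue = startingValue;
        }
        return Printer;
    })();
})(Printing || (Printing = {}));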

Summary

TypeScript is a tool that enables developers to write more robust, scalable, maintainable, team friendly JavaScript code. Large JavaScript applications tend to descend into a spaghetti, landmine-ridden battlefield that can only be maintained by a single developer who understands all the moving parts. With TypeScript, those days are over. Getting started with TypeScript is as simple as renaming a file, sprinkling on a few type annotations, and reaching out to tsc via the command-line or using your editor of choice (on the operating system of your choice!).

Writing AngularJS 1.x with TypeScript

AngularJS 1.x is a front end JavaScript framework that has gained huge traction and popularity in the development community. AngularJS greatly simplifies previously hard tasks like two-way data binding, templating, the MVC design pattern, dependency injection and more. Using TypeScript, we can create more robust and scalable AngularJS code to deliver the ultimate user experience whilst avoiding the traditional spaghetti code nightmare that JavaScript applications can often descend into.

AngularJS version 1.x is written in JavaScript. Its successor, Angular 2.x, is written using TypeScript. It was originally going to be written in Google’s proprietary language AtScript, but the teams have merged the projects and are now working together on TypeScript.

All the code for this project is available to view on GitHub.  There is also a demo project on GitHub pages.

Note: This tutorial assumes you have some knowledge of Angular 1.x.

Note: This tutorial tries to stay editor independent, meaning the concepts apply to TypeScript specifically and not to an editor. When necessary, screenshots showing relevant information will be of VS Code. VS Code is a free, cross platform editor that has excellent built in TypeScript support.

Type Declaration Files

Also known as “Type Definition Files”, these are files that have the extension .d.ts and contain all the information to, for lack of a better word, describe the structure of a JavaScript library.

“When using an external JavaScript library, or new host API, you’ll need to use a declaration file (.d.ts) to describe the shape of that library.”

Referenced from the TypeScript handbook.

You will need to reference the type declaration files for AngularJS to get full auto-completion/IntelliSense support and to be able to fully utilise TypeScript’s static typing functionality.

Initial Setup

We will develop a simple application based on the HaveIBeenPwned API.  We’ll cover the specifics of that shortly.

Task runner

First, there’s some configuration in VS Code that we need to do. This same concept applies regardless of your editor.

  1. Create a new directory somewhere, say on your desktop, called AngularJSAndTypeScript101.
  2. Open VS Code, click File > Open Folder… and point to the directory you just created.
  3. Click File > New File…, name it tsconfig.json, and add the following code;
{
    "compilerOptions": {
        "module": "none",
        "target": "ES5"
    },
    "files": [
        "app.ts"
    ]
}

Now you need to create a task runner. A task runner will transpile your TypeScript each time you run the task.

  1. In VS Code, add a new file called ‘app.ts’
  2. Press Ctrl+Shift+B to trigger the task runner. VS Code will bring up a little message at the top of the screen telling you that there is no task runner configured. Click the “Configure Task Runner” button. VS Code will create a tasks.json file automatically for you.
  3. Change the args property from HelloWorld.ts to app.ts. We will revisit this later to use a wildcard selector.

Open your app.ts file, and add the following code;

class Hello{
    constructor() {
        console.log("Hello!");
    }
}

Press Ctrl+Shift+B to kick off the build. Once complete (it takes < 1 second on a half decent machine) you will notice a new file has been added to the project, called app.js.

Switch to split view by clicking the little side by side window icon on the top right hand side of the screen, and in the right hand pane open the newly generated app.js file. This is optional of course, if you don’t care about the compiled JavaScript, that’s absolutely fine. Sometimes it is nice to be able to see it.

Compile on save

If you have Node.js tools installed, and you don’t want the hassle of pressing Ctrl+Shift+B every single time you want to compile, you can do the following to enable compile on save;

  1. Open a Node.js command prompt and change directory to the root folder for your project
  2. Type the following command;
tsc -w app.ts

TypeScript will listen for changes to your file, and transpile it automatically every time you make a change (and save).

In addition, VS Code has an Auto-Save feature.  To enable it, click File > Auto Save.  If you have the TypeScript file and JavaScript files open side by side, the JavaScript file will periodically update without the need to manually save changes.

Adding AngularJS

Next, we need to add AngularJS to our project. You can do this using either Node Package Manager (NPM) or manually, by downloading the appropriate files and adding them into your project folder. VS Code is still pretty young at this point (July 2015) and support for pulling in packages is non-existent.

Using NPM and Bower

If you don’t already have Bower, a client side package manager, installed, you can use the following command from a Node.js command prompt;

npm install -g bower

This will install bower globally on your machine. Now add a new file to your project called bower.json and add the following configuration:

{
    "name": "AngularTypeScript101",
    "version": "1.0.0",
    "dependencies": {
        "angular": "~1.4.0",
        "angular-route": "~1.4.0"
    }
}

This will bring in the latest 1.4.x version of AngularJS. Feel free to change this to use whatever the current version of Angular 1.x is.

Now run the following command, again from your Node.js command prompt;

bower install

This will download the packages and add them to your project folder under a sub directory called bower_components.

Using Visual Studio NuGet

If you happen to be following along using Visual Studio (full fat), you can easily install these packages using the following commands, which you run via the Package Manager Console (PowerShell) window;

install-package angularjs.core
install-package angularjs.route

You can use the same mechanism to easily update the packages too (using the update-package command).

Manually

You can, of course, download Angular manually and add it to your project. You will find the latest version on the AngularJS website.

You will probably find it easier to maintain your packages over time using Bower.

Adding the type declaration files

Thankfully TypeScript has a built in mechanism for adding type declaration files to your project. This is done using the TypeScript Definition Manager.

Using the TypeScript Definition Manager

You install the TypeScript definition manager using NPM. Open a Node.js command prompt and enter the following;

npm install -g tsd

This will install the TypeScript definition manager globally.

The TypeScript declaration files are fully open source and can be found on the Definitely Typed GitHub repo. The AngularJS declaration files can be installed using the following command;

tsd install angular
tsd install angular-route

This will create a new directory called typings with a sub directory called angularjs. Within will be the two Angular declaration files you just installed.  As AngularJS has a dependency on jqLite, the declaration files for jQuery will also be pulled in.

Using NuGet

You can install the declaration files by running the following command in the Package Manager Console PowerShell window.

Install-Package angularjs.TypeScript.DefinitelyTyped

Manually

You could of course just download the declaration files directly from the GitHub repository and copy them into the correct folder (typings/angularjs).

Referencing the declaration files

Now that you have installed the declaration files for Angular, flip back to your app.ts file and add the following code;

class Hello{
    constructor() {
        angular.module("HelloApp", []);
    }
}

No matter which technique you use to compile your TypeScript, you will get the following error message (I believe that support for this is going to be vastly improved in future versions of VS Code);

Cannot find name 'angular'

That’s fine. All we need to do is tell TypeScript about our declaration files by adding the following code to the very top of the code file;

/// <reference path="typings/angularjs/angular.d.ts" />
/// <reference path="typings/angularjs/angular-route.d.ts" />

Many modern editors, including Visual Studio and Webstorm, don’t require this additional step because they’re smart enough to check for the existence of the typings sub folder.

Add the references, press Ctrl+Shift+B (or Save) to recompile and the error should go away.

Web Server

AngularJS, like most other JavaScript libraries, doesn’t work properly when viewed directly from the file system. If you open a file on your desktop, the path will be something like this;

file:///C:/Users/jon.preece/Desktop/AngularTypeScript101/index.html

Chrome, Firefox, and most other web browsers won’t allow you to consume other markup files due to security concerns (CORS). With that in mind, you will need some sort of web server to serve up the files so they can be previewed properly in the browser.

VS Code does not have a built in HTTP server, so we must use another mechanism. If you’re using Visual Studio (full fat), then you can use the built in server and skip this step.

Http-Server

There is a very nice HTTP server on NPM that is more than sufficient for our needs. To install it, run the following command from a Node.js command prompt;

npm install -g http-server

This will install a simple HTTP server globally on your machine. To run the server, change directory (cd) to your root project folder and type the following command;

http-server -o --cors

There are lots of options you can pass in to customize the behaviour, but the default configuration will be enough for us.

The default address is http://localhost:8080.

IIS

You can configure IIS to host the site for you. To do so, follow these steps;

  1. Open the Internet Information Services (IIS) manager (inetmgr.exe)
  2. Add a new site, call it “HaveIBeenPwnd”
  3. Point the physical path to your project folder.
  4. Enter the following host name: haveibeenpwnd.local
  5. Open your hosts file (C:\Windows\System32\Drivers\etc\hosts) in your favourite text editor (VS Code if you like!)
  6. Add the following line;

127.0.0.1 haveibeenpwnd.local

Open your web browser and point to haveibeenpwnd.local.

Sample Project

In order to demonstrate the concepts thoroughly, we are going to develop a simple project that utilizes the HaveIBeenPwned API. If you are not familiar with this service, HaveIBeenPwned is developed and maintained by developer security expert Troy Hunt. HaveIBeenPwned checks to see if your email address or username has been compromised in a data breach. The HaveIBeenPwned API is a way to access this wealth of information programmatically via a RESTful endpoint.

The project shows how to write the following AngularJS concepts in TypeScript;

  • Controllers
  • Directives
  • Filters
  • Services
  • Routing

The sample project is certainly not feature complete. In fact, all the samples are about as simplistic as they can possibly be… but that’s intentional.

Style Guide

Whenever I write any AngularJS code, I always follow (with only minor deviation) the Angular JS Style Guide, which was written by John Papa. It is an excellent guide and I highly recommend that you check it out.

If you’ve never used a style guide before, just remember, a guide is exactly that… it is a guide. Use what bits make sense for your project.

Project Structure

The structure of the project is as follows;

/client
    /HaveIBeenPwned 
        /controllers
        /directives
        /filters
        /models
        /services
        /views
/content
    /css
/typings
    /angularjs
    /jquery
index.html
package.json
tsconfig.json

We will add in the appropriate files as we go along.

Referencing declaration files

At the time of writing, VS Code does not recognise the fact that you have added type declaration files to your project. I suspect that in the future this will be resolved, but for now you have to manually reference the typings files directly in your TypeScript files.

Type declaration files are referenced using a special triple forward slash (///) syntax;

/// <reference path="../../typings/angularjs/angular.d.ts" />
/// <reference path="../../typings/angularjs/angular-route.d.ts" />

VS Code will now load in these files and provide IntelliSense/auto-complete based on the declarations made in each file. If you are using Visual Studio, you can skip this step.

Note: It is out of the scope of this post to discuss how type declaration files work; we will cover this in a future post.

Note: Please ensure that every TypeScript file that you write for this project has these references at the top.

Modules

When not using any sort of AMD, the default pattern that TypeScript generates is the IIFE pattern. Take the following TypeScript code;

class App{
}

TypeScript generates the following;

var App = (function () {
    function App() {
    }
    return App;
})();

This code is fine. It works. But there is one major problem. TypeScript has placed a variable on the global scope. A variable called App now exists on the window object. In our case, it is very unlikely that this would impact any other part of the application, but in larger projects with lots of scripts and external dependencies, this is a common problem. TypeScript introduces modules to help avoid this problem and also help with the organization of your code.

Usage;

module HaveIBeenPwned{
    "use strict";

    class App{

    }
}

TypeScript generates the following;

var HaveIBeenPwned;
(function (HaveIBeenPwned) {
    "use strict";
    var App = (function () {
        function App() {
        }
        return App;
    })();
})(HaveIBeenPwned || (HaveIBeenPwned = {}));

A global object is still added to the window object, but our App class is now scoped to the HaveIBeenPwned module’s IIFE rather than to the global scope; anything exported from a module with that name will be added to the HaveIBeenPwned object as a property.
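
You can verify this from the browser's developer console (a sketch; the exact output varies by browser);

window.HaveIBeenPwned;  // Object - the single global created by the module
window.App;             // undefined - App is scoped to the module's IIFE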

Example;

module HaveIBeenPwned{
    "use strict";

    class Routes {

    }
}

TypeScript generates the following code;

var HaveIBeenPwned;
(function (HaveIBeenPwned) {
    "use strict";
    var Routes = (function () {
        function Routes() {
        }
        return Routes;
    })();
})(HaveIBeenPwned || (HaveIBeenPwned = {}));

As the HaveIBeenPwned object already exists, it is simply reused rather than recreated (note the HaveIBeenPwned || (HaveIBeenPwned = {}) expression), and the Routes class is scoped to the same module.

Add a file called app.module.ts to the HaveIBeenPwned folder, and add the following;

module HaveIBeenPwned{
    "use strict";	

    angular
        .module("HaveIBeenPwned", ["ngRoute"]);
}

This will initialize the AngularJS app, and pull in the Routing module (ngRoute).

Point to take away: Modules help organise your code and stop the global scope from becoming polluted.

Dependency injection for functions

Routing is the perfect place to start leveraging TypeScript in our application. Routing configuration tells AngularJS where to find our views, and which controller to use for a specific path navigated to.

Routing is passed to AngularJS as configuration, using the config function.

Traditional AngularJS code;

angular
    .module("HaveIBeenPwned")
    .config(["$routeProvider", function($routeProvider) { /*Routing goes here*/ } ]);

The traditional JavaScript way is to pass a list of dependencies as a string array, then the final parameter is a function in which those dependencies are injected. This is primarily to support minification, so that dependencies can still be injected once all the variable names have been changed.  TypeScript enables us to write this type of code in a much cleaner fashion.

You might be aware that with AngularJS, you can directly inject dependencies into a function by attaching a property called $inject.

Example;

function routes(){
    
}
routes.$inject = ["$routeProvider"]

I’ve found that this is rarely used in practice…perhaps due to lack of knowledge of the feature or simply because most of the documentation shows how to use the traditional style shown above.

TypeScript lends itself well to using the $inject property.

Add a new file called app.route.ts to the HaveIBeenPwned directory, and add the following code;

/// <reference path="../../typings/angularjs/angular.d.ts" />
/// <reference path="../../typings/angularjs/angular-route.d.ts" />

module HaveIBeenPwned {
    "use strict";

    function routes($routeProvider: ng.route.IRouteProvider){
    }

    routes.$inject = ["$routeProvider"]

    angular
        .module("HaveIBeenPwned")
        .config(routes);
}

Most of this code should look familiar to you. Technically, there is absolutely no reason why you can’t use the traditional AngularJS style for passing dependencies to a function, I just find this syntax much cleaner.

Type declarations on function parameters

With this code we encounter the first usage of a type declaration;

function routes($routeProvider: ng.route.IRouteProvider){

First, take a look at the compiled output;

function routes($routeProvider) {

It’s important to note that the type declarations are a TypeScript specific feature. Type declarations are not a feature of JavaScript. Type declarations are there to provide IntelliSense/auto-complete support and code refactoring, such as renaming of variables.

All AngularJS declarations can be found in the ng namespace. In this case, we need the route provider to define our routing configuration. Routing is not part of the core AngularJS library, in fact it was split out into its own module named ngRoute. All declarations for routing can be found in the ng.route namespace.

Add the following code to the routes function. Please type the code rather than copy/paste it.

$routeProvider
    .when("/search", {
        templateUrl: "/client/HaveIBeenPwned/views/_search.html",
        controller: "SearchController",
        controllerAs: "vm"
    })
    .otherwise({
        redirectTo: "/search"
    });

You should immediately see the benefits now.

IntelliSense On Routing Provider

 

Not only do you get IntelliSense with full documentation, but you also get the added benefit of compile time checking;

Compile Time Checking

 

In this case I have provided the wrong number of arguments to the function… I get instant visual feedback on that.

Note: My code still compiled. Even though the code I wrote in the above examples was wrong, the JavaScript was still generated and I could call that in the browser (albeit with unpredictable behaviour).

To complete the routing, add the following code to the routes function;

$routeProvider
    .when("/search", {
        templateUrl: "/client/HaveIBeenPwned/views/_search.html",
        controller: "SearchController",
        controllerAs: "vm"
    })
    .otherwise({
        redirectTo: "/search"
    });

Don’t worry about creating the views or controllers at this point. We will do that later.

Dependency injection for classes

Injecting dependencies into classes is slightly different. With functions, we inject dependencies by tacking on an $inject variable to the function.  With classes, we instead have a static variable with the same name.

Example;

static $inject = ["PwnedService"];

As with most other programming languages, static is not instance specific; it applies to all instances. The generated JavaScript we end up with is the same code as was generated for functions.

Example;

SearchController.$inject = ["PwnedService"];

Add a new file to the controllers directory, named search.ts. This will eventually be used to call an AngularJS service, which will call out to the HaveIBeenPwned API and return the result.

module HaveIBeenPwned {
    class SearchController {
        static $inject = ["PwnedService"];

        constructor(private pwnedService: IPwnedService) {

        }
    }

    angular
        .module("HaveIBeenPwned")
        .controller("SearchController", SearchController);
}

Note: We don’t have an IPwnedService yet, we’ll get to that in a minute. Ignore any compile time errors your editor might be giving you at this point.

Constructors

Constructors are a feature of ES6, called when an object is instantiated. Again, constructors work the same in JavaScript as they do in any other programming language.

In the interest of cross-browser support, TypeScript generates a function with the same name as the containing class to indicate that it’s a constructor.

Example of an empty constructor in TypeScript;

class SearchController {
    constructor() {
        //constructor logic	
    }
}

and the resulting JavaScript

var SearchController = (function () {
    function SearchController() {
        //constructor logic	
    }
    return SearchController;
})();

Again, constructors are now natively supported in ES6 so when targeting that version the code won’t be transpiled.

Understanding constructor parameters

Assume the following code;

constructor($http : ng.IHttpService) {  
}

If you wanted to reuse the $http variable in other functions in your class, you might be tempted to do the following;

private _httpService: ng.IHttpService;

constructor($http : ng.IHttpService) {
    this._httpService = $http;
}

You should understand that the act of manually assigning the constructor parameters to a private field (like you might do in other languages) is redundant in TypeScript. Mark the parameter with an access modifier and, under the hood, TypeScript does this for you.

The following code is valid TypeScript;

constructor(private $http : ng.IHttpService) {
    
}

someOtherFunction() {
    this.$http.get(...);   
}

This is valid because under the hood TypeScript created a property on the class with the name $http and assigned the value automatically inside the constructor.

The transpiled JavaScript

var PwnedService = (function () {
    function PwnedService($http) {
        this.$http = $http;
    }
    ...
}

Applying the public access modifier exposes the parameter externally; applying private makes it accessible only within the class itself and not externally. I typically mark all my constructor parameters as private unless I need them to be accessible externally; I’m not aware of any performance or other impact either way.

Make objects visible to others using ‘Export’

The code shown in the previous example creates a controller called SearchController, using the ES6 class feature. By default, the class is not accessible to any other external objects.

SearchController Is Inaccessible

 

SearchController is not defined because it is not accessible.  It is desirable to expose certain objects so that they can be consumed in other places. Generally I only expose objects that have to be exposed, but there is no hard and fast rule on this and no performance impact that I’m aware of, other than a slightly busier namespace object.

To expose a class, interface, or function to other objects, use the export keyword.

export class SearchController {

This makes a subtle change to the generated code. This following line is added after the SearchController IIFE.

HaveIBeenPwned.SearchController = SearchController;

Now re-running the code in developer tools results in the following;

SearchController Is Accessible

 

Note: This is a transformation provided by TypeScript and there is not a comparable feature in ES6 at the time of writing (July 2015).

Interfaces

Interfaces are contracts that state that an object will contain the functions and properties defined therein.

Simple interface declaration;

module HaveIBeenPwned{
    export interface IPwnedService {
		
    }
}

Note that the above code results in zero JavaScript output. Why? Simple. JavaScript does not support interfaces or contracts of any kind. JavaScript is a dynamic language and properties/functions can be added to any object at any time. Interfaces are syntactic sugar provided by TypeScript to support IntelliSense/auto-complete and compile time checking.

Add the following function to the interface (we will discuss promises shortly);

check(address:string) : ng.IPromise<{}>;

An interface function has no implementation; the implementation is defined on the implementing class.

Naming conventions

The official TypeScript style guide clearly states that interface names shouldn’t be prefixed with I. I come from a C# background and I’m simply too stuck in my old ways to adhere to this recommendation, and I know a lot of folk feel the same. My advice to you, choose whatever naming convention makes the most sense to you, and stick with that.

Note also that I like to keep my interfaces and classes in the same physical file as the class that implements it (as long as it’s not too long). I recommend, again, that you pick an approach that works best for you, and then stick with that.

Working with interfaces

You can implement an interface on a class using the implements keyword as follows;

class PwnedService implements IPwnedService

Flesh out the class as follows;

module HaveIBeenPwned {
    export interface IPwnedService {
        check(address: string): ng.IPromise<{}>;
    }

    class PwnedService implements IPwnedService {

        static $inject = ["$http"];

        constructor(private $http: ng.IHttpService) {
        }

        check(address: string): ng.IPromise<{}> {
        }
    }

    angular
        .module("HaveIBeenPwned")
        .service("PwnedService", PwnedService);
}

If you add a property or a function to the interface, or change it, and forget to update the implementation class, you will get a compile time error.

You can also use interfaces with JSON. Take the following TypeScript example;

var a = <IMyInterface>{
  someValue: 'Hello'  
};

Instead of having to instantiate a class that implements the IMyInterface interface, you can exploit JavaScript’s dynamic nature and pass in a raw JSON object. Specifying the interface before the object definition is a hint to TypeScript of what you are doing. In return, you will get full IntelliSense/auto-complete when defining the object, as well as compile time checking for all usages of that object.
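
For completeness, the hypothetical IMyInterface used above would be declared something like this;

interface IMyInterface {
    someValue: string;
}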

Finally, you can also derive interfaces from other interfaces. Interface inheritance if you like.

Example;

export interface IEnterKeyPressAttributes extends ng.IAttributes {
    ngEnter: string;   
}

The above example shows a new interface, that uses the extends keyword to derive from the ng.IAttributes interface. The IEnterKeyPressAttributes interface has all the same methods and properties as the ng.IAttributes interface, and can add its own too.

Promises

Promises are a new shiny feature in ES6. Unfortunately, promises need to be supported either natively by the browser or via a polyfill. AngularJS has promises baked right in to many components. You can use promises indirectly, via a common service such as $http, or you can create promises directly using $q.

Example;

class PwnedService {
    constructor(private $q : ng.IQService) {
    }
    
    check() : ng.IPromise<{}> {
        var defer = this.$q.defer();
        //Do something useful
        //Then call either defer.resolve() or defer.reject()        
        return defer.promise;   
    }   
}

Or using $http:

class PwnedService {
    constructor(private $http : ng.IHttpService) {
    }

    check(address : string) : ng.IPromise<{}> {
        return this.$http.get("https://haveibeenpwned.com/api/v2/breachedaccount/" + address);
    }
}

In the interest of avoiding the promise anti-pattern, I tend to use promises through other components, rather than directly and just chain on callbacks.

Anyway, whichever technique you prefer, you will end up defining ng.IPromise<{}> as the return type for the parent function (see check method above).

Technically, in TypeScript world, this is wrong. I’ve basically said that the promise will return “an object” as yet to be determined. However, I know what the correct type is;

ng.IPromise<ng.IHttpPromiseCallbackArg<BreachedAccount[]>>

Why didn’t I just write that in the first place? I simply don’t like the use of nested generics. Sure I lose a little bit of IntelliSense and compile time checking, but this has never been an issue for me.
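
For comparison, a fully typed version of the check method might look something like this (assuming the declaration files expose a generic overload of $http.get, which recent versions of angular.d.ts do);

check(address: string): ng.IPromise<ng.IHttpPromiseCallbackArg<BreachedAccount[]>> {
    return this.$http.get<BreachedAccount[]>("https://haveibeenpwned.com/api/v2/breachedaccount/" + address);
}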

Calling a function that returns a promise

Flip back to the search.ts file (where SearchController lives) and add the following function;

private breachedAccounts : BreachedAccount[];

submit(address: string) {
    this.pwnedService.check(address).then((result : ng.IHttpPromiseCallbackArg<{}>) =>{
        this.breachedAccounts = result.data;
    });
}

There’s no magic here, the code is the same as what you would write in JavaScript. The only difference is that you get IntelliSense support on the result object.

The any type

Sometimes, a type isn’t known until runtime. This is especially true when calling a Web API back end. The response you get when an exception is thrown by the runtime (unhandled) is very different than an exception thrown by your code.

If your code is really broken, you might get a 500 response with a yellow screen of death. The yellow screen of death is pure HTML markup, with no JSON representation.

A runtime exception object might look like this;

{
    ...
    "Exception": {
        "Message": "Object reference not set to an instance of an object."   
    }   
    ...
}

Also, your code might return the following ‘graceful’ exception when an error is encountered.

{
    ...
    "exception": {
        "message": "The request was malformed"   
    }   
}

The difference between these two responses? Casing. By default, the .NET runtime returns responses in Pascal Case. You have to jump through several hoops to get the response returned in Camel Case, a step all too often not done fully.

In situations where type isn’t known, use the any type;

submit(address: string) {
	this.pwnedService.check(address).then((result : ng.IHttpPromiseCallbackArg<{}>) =>{
		this.breachedAccounts = result.data;
	})
	.catch((reason : any) => {
		alert(reason.Message || reason.message);	
	});
}

This will put an end to any compile time nagging.

--noImplicitAny

In the above example, we have explicitly told TypeScript that reason does not have a type. With that in mind, do you think it’s equally valid to leave off the type declaration altogether? Absolutely, because the static typing in TypeScript is optional.

The following code is valid TypeScript;

submit(address: string) {
	this.pwnedService.check(address).then((result : ng.IHttpPromiseCallbackArg<BreachedAccount[]>) =>{
		this.breachedAccounts = result.data;
	})
	.catch((reason) => {
		alert(reason.Message || reason.message);	
	});
}

No type was given for reason. However, because the type is any in the type declaration files, it is still explicit.

Take the following code;

var y;

In this code there is no way that TypeScript can know what the type is, therefore it is implicitly any.
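
A couple of contrasting examples;

var y;          // no annotation and no initial value... implicitly 'any'
var z: any;     // explicitly 'any'; fine even under --noImplicitAny
var s = "abc";  // no annotation needed; 'string' is inferred from the value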

If you want to prevent implicit use of the any type, you can pass in the --noImplicitAny flag to the TypeScript compiler when you execute it.

tsc --noImplicitAny -w app.ts

Model classes

Add the following model class to the models folder;

module HaveIBeenPwned {
    export class BreachedAccount {
        Title: string;
        Name: string;
        Domain: string;
        BreachDate: string;
        AddedDate: string;
        PwnCount: number;
        Description: string;
        DataClasses: string[];
        IsVerified: boolean;
        LogoType: string;
    }
}

This class will be used to map the response from the HaveIBeenPwned API back to an object, for strong typing and compile time support. This is the exact shape of the data returned by the API, and the API returns its properties in Pascal Case, so we will have to use that here (or write ugly mapping code, but I’d rather avoid that).

Search View

Add a new file to the views folder called _search.html. Add the following markup;

<form ng-submit="vm.submit(vm.emailAddress)">
    <div>
	<label>Enter your email address:
	    <input type="email" id="emailAddress" name="emailAddress" ng-model="vm.emailAddress" placeholder="[email protected]" ng-enter="vm.submit(vm.emailAddress)">
	</label>
    </div>
    <button type="submit">Check Now</button>
</form>
<table>
    <thead>
 	<tr></tr>
    </thead>
    <tbody>
	<tr ng-repeat="breachedAccount in vm.breachedAccounts">
	    <td>{{breachedAccount.Title}}. <div ng-bind-html="breachedAccount.Description | asHtml"></div></td>
	</tr>
    </tbody>
</table>

There are two things of particular interest here, the use of a custom AngularJS directive called ngEnter, and a custom filter called asHtml.

ngEnter directive usage;

<input type="email" id="emailAddress" name="emailAddress" ng-model="vm.emailAddress" placeholder="[email protected]" ng-enter="vm.submit(vm.emailAddress)">

asHtml filter usage;

<div ng-bind-html="breachedAccount.Description | asHtml"></div>

We will need to create both of these before we can continue.

Filters

Filters are defined as follows;

A filter formats the value of an expression for display to the user.

When we call out to the HaveIBeenPwned API, it will return an array of BreachedAccount to us. Each BreachedAccount will have a property called Description. This description is HTML markup that will contain links and other interesting stuff.

By default, AngularJS (rightly so) will encode the string to make it safe. The HTML markup will be rendered on the page, rather than added to the DOM and executed.

We want to override this default behaviour, and render the HTML instead. Generally speaking, I wouldn’t recommend this approach because of the security implications (injection of potentially dangerous script), but in this small contrived example it is fine. And let’s be honest, it is unlikely that Troy Hunt, the security guy, is going to have some malicious script sent to us. Of course, you should never take this for granted.

Anyway, add a file called asHtml.filter.ts to the filters directory. In terms of their behaviour, filters are much like the routing code we wrote earlier.

Add the following code;

module HaveIBeenPwned{
    "use strict";

    export function asHtml($sce : ng.ISCEService) {
        return (text : string) => {
            return $sce.trustAsHtml(text);
        };
    }

    angular
        .module("HaveIBeenPwned")
        .filter("asHtml", asHtml);

    asHtml.$inject = ["$sce"];
}

Filters are defined by calling the filter function, passing in the name of the function as a string, and a reference to the function.

In order to force AngularJS to skip over encoding the description, we need to inject the $sce service (typed as ng.ISCEService) and call its trustAsHtml method. The method will be called once the description value is known, and for each instance of BreachedAccount in the array.

Directives

Now we need a new AngularJS directive called ngEnter. The purpose of this attribute, which will be attached to a text input field, will be to call a function on our controller when the user presses Enter on their keyboard. This means that the user won’t have to click the left button on their mouse to perform a search, they can do it straight from the keyboard. The beauty of directives is that they can easily be used throughout your application.

The ngEnter attribute will take the name of the method to invoke when the user presses Enter.

Directives are slightly more involved than filters. A typical directive in AngularJS consists of a link function, a require property (which names any controller the directive depends on) and a restrict property (which determines how the directive can be used, for example as an attribute or an element).

If you want to learn about directives in depth, The nitty-gritty of compile and link functions inside AngularJS directives by Jurgen Van de Moere is a good read.

Start by defining a class, called EnterKeyPressDirective, and implement the ng.IDirective interface;

class EnterKeyPressDirective implements ng.IDirective {
    
}

Doing this doesn’t give us much. In fact, if you take a look at the type declaration file all the functions and properties are optional;

interface IDirective {
    compile?: IDirectiveCompileFn;
    controller?: any;
    controllerAs?: string;
    bindToController?: boolean|Object;
    link?: IDirectiveLinkFn | IDirectivePrePost;
    name?: string;
    priority?: number;
    replace?: boolean;
    require?: any;
    restrict?: string;
    scope?: any;
    template?: any;
    templateUrl?: any;
    terminal?: boolean;
    transclude?: any;
}

However, I’d say it is a pretty good practice to include the interface.

The link function is called after the compile function at runtime. The link function takes 4 parameters and doesn’t return a value;

link($scope: ng.IScope, elm: Element, attr: ng.IAttributes, ngModel: ng.INgModelController): void

The value of ngEnter (the method to invoke) will be passed to us via the attr parameter. The problem is, ng.IAttributes knows nothing about ngEnter.

Create a new interface that extends ng.IAttributes, and add the ngEnter property to it;

export interface IEnterKeyPressAttributes extends ng.IAttributes {
    ngEnter: string;
}

Now replace ng.IAttributes with IEnterKeyPressAttributes;

link($scope: ng.IScope, elm: Element, attr: IEnterKeyPressAttributes, ngModel: ng.INgModelController): void {
    
}

And flesh out the function as follows;

var element = angular.element(elm);
element.bind("keydown keypress", (event: JQueryEventObject) => {

    if (event.which === 13) {
        $scope.$apply(() => {
            $scope.$eval(attr.ngEnter);
        });

        event.preventDefault();
    }

});

We are using jqLite to subscribe to the element’s keydown and keypress events. Once a key is pressed, we ensure it is the Enter key (key code 13), and then call the function defined by the ngEnter attribute.

Also, we need to tell AngularJS that this directive requires an instance of ngModel and to restrict the directive so that it can only be used as an attribute. Add the following code to the class;

require = "?ngModel";
restrict = "A";

We finish with a hack. I’m not sure if this is a bug in AngularJS, or expected behaviour.

Each time our directive is required, we need to ensure that a new instance of that directive is created. Add the following static function;

static instance(): ng.IDirective {
    return new EnterKeyPressDirective();
}

For reasons unclear, we have to take care of the leg work of creating a new instance of the directive, because AngularJS doesn’t seem to take care of that for us.

So when defining the directive, pass in the instance function instead of the class. This will then be called each time AngularJS needs an instance of the directive, which will in turn ensure a new instance is created;

angular
    .module("HaveIBeenPwned")
    .directive("ngEnter", EnterKeyPressDirective.instance);

We’re good to go.
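
For reference, here is the directive assembled in one place from the snippets above, saved as search.directive.ts in the directives folder (the file name referenced later from index.html);

/// <reference path="../../typings/angularjs/angular.d.ts" />
/// <reference path="../../typings/angularjs/angular-route.d.ts" />

module HaveIBeenPwned {
    "use strict";

    export interface IEnterKeyPressAttributes extends ng.IAttributes {
        ngEnter: string;
    }

    class EnterKeyPressDirective implements ng.IDirective {
        require = "?ngModel";
        restrict = "A";

        link($scope: ng.IScope, elm: Element, attr: IEnterKeyPressAttributes, ngModel: ng.INgModelController): void {
            var element = angular.element(elm);
            element.bind("keydown keypress", (event: JQueryEventObject) => {
                if (event.which === 13) { // 13 is the Enter key
                    $scope.$apply(() => {
                        $scope.$eval(attr.ngEnter);
                    });

                    event.preventDefault();
                }
            });
        }

        static instance(): ng.IDirective {
            return new EnterKeyPressDirective();
        }
    }

    angular
        .module("HaveIBeenPwned")
        .directive("ngEnter", EnterKeyPressDirective.instance);
}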

Final steps

Add a new file to the root folder called index.html. Also add a CSS file to the css folder, under content, called site.css.

Open index.html and add the following markup;

<html>
    <head>
	<link href='http://fonts.googleapis.com/css?family=Roboto' rel='stylesheet' type='text/css'>
	<link href="content/css/site.css" rel="stylesheet"/>
    </head>
    <body ng-app="HaveIBeenPwned">
	<h1>Have I Been Pwned?</h1>
		
	<div class="container" ng-view>
			
	</div>
		
	<script src="bower_components/angular/angular.js"></script>
	<script src="bower_components/angular-route/angular-route.js"></script>
	<script src="client/HaveIBeenPwned/app.module.js"></script>
	<script src="client/HaveIBeenPwned/app.route.js"></script>
	<script src="client/HaveIBeenPwned/services/pwnedservice.js"></script>
	<script src="client/HaveIBeenPwned/controllers/search.js"></script>
	<script src="client/HaveIBeenPwned/models/breachedaccount.js"></script>		
	<script src="client/HaveIBeenPwned/filters/asHtml.filter.js"></script>	
	<script src="client/HaveIBeenPwned/directives/search.directive.js"></script>	
    </body>
</html>

This will pull in all the relevant scripts, fonts and styles. It will also specify the name of the app (using the ngApp directive) and the view (using the ngView directive).

Open site.css and add the following styles;

body{
    font-family: 'Roboto', sans-serif;
}
h1{
    font-size: 24px;
    text-align: center;
}

.container{
    width:50%;
    margin-left:auto;
    margin-right:auto;
}

input {
    width:100%;
    height:35px;
    font-size: 16px;
    margin:10px auto;
}

button {
    background: #25A6E1;
    background: -moz-linear-gradient(top,#25A6E1 0%,#188BC0 100%);
    background: -webkit-gradient(linear,left top,left bottom,color-stop(0%,#25A6E1),color-stop(100%,#188BC0));
    background: -webkit-linear-gradient(top,#25A6E1 0%,#188BC0 100%);
    background: -o-linear-gradient(top,#25A6E1 0%,#188BC0 100%);
    background: -ms-linear-gradient(top,#25A6E1 0%,#188BC0 100%);
    background: linear-gradient(top,#25A6E1 0%,#188BC0 100%);
    filter: progid: DXImageTransform.Microsoft.gradient( startColorstr='#25A6E1',endColorstr='#188BC0',GradientType=0);
    padding:8px 13px;
    color:#fff;
    font-family:'Helvetica Neue',sans-serif;
    font-size:17px;
    border-radius:4px;
    -moz-border-radius:4px;
    -webkit-border-radius:4px;
    border:1px solid #1A87B9;
    cursor: pointer;
}               

tr:nth-child(even){
    background-color:#eee;
}

tr td {
    padding:10px;
}

It’s going to look functional, not beautiful.

HaveIBeenPwned

 

Enter the test email address, [email protected] and press Enter. You should get some sample data regarding breaches for that account.

Summary

We looked, at a high level, at how to use TypeScript and AngularJS together. We looked at modules, dependency injection, promises, filters and directives… the most common components of AngularJS 1.x applications. TypeScript’s static typing functionality makes compile time checking and refactoring possible, whilst allowing us to exploit ES6 goodness, resulting in cleaner, more maintainable code. We developed a sample application that made use of the HaveIBeenPwned API to demonstrate these concepts.

Using ES6 features with TypeScript

TypeScript is a transpiler

The TypeScript compiler converts your code from TypeScript, which is a superset of JavaScript, to JavaScript.

Compiler vs. Transpiler

There is some confusion about the difference between a compiler and a transpiler. A compiler takes your code and turns it into something very different, a whole new language.

A good example is with a high level language such as C# or Visual Basic. When you write code and build it, the compiler (either csc.exe [C# compiler] or vbc.exe [Visual Basic compiler] in this case) takes your code and turns it into Intermediate Language (IL).

Example C# code;

private static void Main(string[] args)
{
    Console.WriteLine("Hello, World!");
}

And the compiled code (as seen using ILDasm.exe);

.method private hidebysig static void  Main(string[] args) cil managed
{
  .entrypoint
  // Code size       13 (0xd)
  .maxstack  8
  IL_0000:  nop
  IL_0001:  ldstr      "Hello, World!"
  IL_0006:  call       void [mscorlib]System.Console::WriteLine(string)
  IL_000b:  nop
  IL_000c:  ret
} // end of method Program::Main

The above code is certainly not C#. The C# has been changed into a whole new language.

A transpiler takes your code and changes it. But it’s still in the same language that you started out with. TypeScript is JavaScript; in fact, TypeScript is a superset of JavaScript. When the TypeScript compiler runs over your code, it reads in TypeScript (which is JavaScript) and outputs JavaScript. The resulting language is the same as what you started out with.

The following TypeScript code is completely valid;

(function() {
    console.log("Hello, World!");
});

And the resulting transpiled JavaScript code;

(function() {
    console.log("Hello, World!");
});

It’s the same! This is an oversimplification, but the point stands.

Take the following example, which uses classes (a feature of ECMAScript 6);

"use strict";
class Hello {
    constructor() {
        console.log("Hello, World!");
    }
}
var hello = new Hello();

And the resulting JavaScript transpiled code;

"use strict";
var Hello = (function () {
    function Hello() {
        console.log("Hello, World!");
    }
    return Hello;
})();
var hello = new Hello();

The TypeScript compiler has taken your ECMAScript 6 and converted it to use the IIFE pattern, which is a pattern well supported in all browsers. By the way, the original class based code shown above is perfectly valid ES6 code. You can drop the code into a JS file and load it into your browser and it will work, but ES6 is not as widely supported as ES5 at this time.

TypeScript < 1.5 – Useful ES6 transformations

There are many new features in ECMAScript 6 (ES6) as described in this very good write-up by Luke Hoban on GitHub. I’ve narrowed it down for you to what I think are the most useful and common transformations that you can use right now.

Note: At the time of writing, not all ES6 features can be transpiled. Promises, for example, require browser support and cannot be transpiled to an ES5 equivalent. I don’t expect that trying to fudge in functionality into a browser will ever become a feature of TypeScript, this is something that is best left to a polyfill.

Template strings

Arguably the simplest transformation that TypeScript offers, template strings are simply a way of using variables as part of a string. Template strings use back-ticks (`) to denote that a string contains variables.

Usage;

"use strict";
class Hello {
    constructor() {
        var hello = "Hello";
        var world = "World";

        console.log(`${hello}, ${world}!`);
    }
}
var hello = new Hello();

and the transpiled output;

"use strict";
var Hello = (function () {
    function Hello() {
        var hello = "Hello";
        var world = "World";
        console.log("" + hello + ", " + world + "!");
    }
    return Hello;
})();
var hello = new Hello();

At compile time, TypeScript replaces all template strings with simpler string concatenation (which has been around forever!). So you get the niceness of easier to read code without losing the cross browser support.

Personally, I didn’t exactly like this syntax at first, and at the time of writing some JavaScript linters get confused by the lack of spaces around variable names (Web Essentials, I’m looking at you!). But generally this syntax is clean and relatively easy to read.

Classes

We’ve touched on classes several times already at this point, and if you have done any object oriented programming at all there’s no doubt you have already stumbled across classes. Classes are simply containers; they describe the functionality of the object (an instantiated class), such as its methods and members. In TypeScript/JavaScript, classes are no different.

Usage;

"use strict";
class Hello {
    public id : number;
    private arbitraryValue: boolean;

    constructor() {
        this.id = 42;
        this.arbitraryValue = true;
        this.sayHello();
        this.saySomething("Goodbye, world!");
    }
    sayHello() : void {
        console.log("Hello, World!");
    }
    saySomething(message: string) :void {
        console.log(message);
    }
}
var hello = new Hello();

and the transpiled output;

"use strict";
var Hello = (function () {
    function Hello() {
        this.id = 42;
        this.arbitraryValue = true;
        this.sayHello();
        this.saySomething("Goodbye, world!");
    }
    Hello.prototype.sayHello = function () {
        console.log("Hello, World!");
    };
    Hello.prototype.saySomething = function (message) {
        console.log(message);
    };
    return Hello;
})();
var hello = new Hello();

You can use the following access modifiers to state the accessibility of your methods and variables;

  • public
  • protected
  • private

Note that these access modifiers are only used at compile time, and don’t affect the transpiled JavaScript.
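
For example, with the Hello class above, the compiler will reject any attempt to read the private member from outside the class, even though the member still exists in the emitted JavaScript (a small sketch of my own, not part of the transpiled output shown);

var hello = new Hello();
console.log(hello.id);             // Fine, id is public
console.log(hello.arbitraryValue); // Compile time error, but still present in the emitted JS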

Arrow functions

Also known as “fat arrow functions”, because of the => syntax (an equals sign followed by a greater-than sign), arrow functions are inline functions, similar to lambda expressions in C# and Java.

Usage;

"use strict";	
class Hello {
    constructor() {
        var sayHello = () => console.log("Hello, World!");
        var saySomething = (what : string) => console.log(what);

        sayHello();
        saySomething("Goodbye, world!");
    }
}	
var hello = new Hello();

and the transpiled output;

"use strict";
var Hello = (function () {
    function Hello() {
        var sayHello = function () { return console.log("Hello, World!"); };
        var saySomething = function (what) { return console.log(what); };
        sayHello();
        saySomething("Goodbye, world!");
    }
    return Hello;
})();
var hello = new Hello();

Arrow functions in TypeScript allow you to write cleaner, more reusable code without having a bunch of ugly inline functions staring at you.
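
One detail the example above doesn’t show is that arrow functions also capture the enclosing this, which TypeScript implements by copying this into a local _this variable in the transpiled output. A quick sketch (my own example, following the same pattern);

"use strict";
class Greeter {
    private message: string = "Hello, World!";

    constructor() {
        // the arrow function captures 'this', so this.message refers to the Greeter's message
        setTimeout(() => console.log(this.message), 1000);
    }
}
var greeter = new Greeter();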

TypeScript >= 1.5 – Useful ES6 transformations

TypeScript version 1.5 adds support for additional transformations (some of which are shown below). You have to have version 1.5+ installed to take advantage of these features.

“for…of” operator

The concept of a for...of loop is pretty simple. You have an array of objects, and you want to iterate through each item in the array. With a for...of loop you can also break and continue in the same way as you could with a standard for loop. A for...of loop, putting aside small differences in performance when dealing with large arrays (and not having to increment a counter to keep track of the position in the array), is effectively syntactic sugar for a standard for loop. As it is a new language construct, a browser has to have native support for it.

TypeScript, however, transforms a for...of loop to a standard ES5 for loop;

Usage;

"use strict";
class Hello {
    constructor() {
        var a = ['a', 'b', 'c'];
        for (var n of a) {
            console.log(n);
        }
    }
}
var hello = new Hello();

and the transpiled output;

"use strict";
var Hello = (function () {
    function Hello() {
        var a = ['a', 'b', 'c'];
        for (var _i = 0; _i < a.length; _i++) {
            var n = a[_i];
            console.log(n);
        }
    }
    return Hello;
})();
var hello = new Hello();
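
As mentioned above, break and continue work just as they do in a standard for loop. A quick sketch (my own example, following the same pattern);

"use strict";
class Hello {
    constructor() {
        var a = ['a', 'b', 'c'];
        for (var n of a) {
            if (n === 'b') {
                break; // exits the loop before 'b' and 'c' are logged
            }
            console.log(n);
        }
    }
}
var hello = new Hello();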

let

let in ES6 is a block-scoped version of var. In a nutshell, var is function scoped whilst let is scoped to the enclosing block. There are already lots of good write-ups that describe the ins and outs of let vs var; an especially good one can be found on the Mozilla Developer Network.

The following code, due to the way that variable hoisting and function scoping work in ES5, is valid;

"use strict";
var Hello = (function () {
    function Hello() {
        var array = ['a', 'b', 'c', 'd'];
        for (var index = 0; index < array.length; index++) {
            var element = array[index];
            console.log(element);
        }
        index = 0;
    }
    return Hello;
})();
var hello = new Hello();

The index variable is scoped to the function, not the block. Changing var index to let index results in the index variable only being accessible inside the block.

The following code is invalid;

"use strict";
class Hello {
    constructor() {
        var array = ['a', 'b', 'c', 'd'];
        for (let index = 0; index < array.length; index++) {
            var element = array[index];
            console.log(element);
        }
        index = 0;
    }
}
var hello = new Hello();

TypeScript allows you to use the let keyword and get all the compile time checking that comes with the feature, whilst maintaining support for older browsers by simply replacing all usages of let with var.

The code shown above transpiles to the following;

"use strict";
var Hello = (function () {
    function Hello() {
        var array = ['a', 'b', 'c', 'd'];
        for (var index = 0; index < array.length; index++) {
            var element = array[index];
            console.log(element);
        }
        index = 0;
    }
    return Hello;
})();
var hello = new Hello();

const

Constants in ES6 are the same concept as in most other programming languages. Traditionally you define a variable using the var keyword. Its value can be read and written at any time. Also, as we’re talking JavaScript (a dynamic language), the type can also be changed at runtime.

For example, the following code is perfectly valid JavaScript;

"use strict";
class Hello {
    constructor() {
        var a = "Hello!";
        console.log(a); //Writes 'Hello!'
        a = 123;
        console.log(a); //Writes 123
    }
}
var hello = new Hello();

A constant in ES6 allows you to set a value and know that value cannot be changed. Take the following code;

"use strict";
class Hello {
    constructor() {
        const a = "Hello!";
        console.log(a); //Writes 'Hello!'
        a = "World!";
        console.log(a);
    }
}
var hello = new Hello();

Running this code results in a runtime error (in Chrome and Firefox, which support the construct);

Uncaught TypeError: Assignment to constant variable.

As const is a native ES6 feature, the ES5 fallback is simply to use a var. This is the transformation TypeScript applies to your code;

"use strict";
var Hello = (function () {
    function Hello() {
        var a = "Hello!";
        console.log(a); //Writes 'Hello!'
        a = "World!";
        console.log(a);
    }
    return Hello;
})();
var hello = new Hello();
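
Note that you don’t lose the safety entirely. The TypeScript compiler itself flags the reassignment at compile time (the exact error wording varies between compiler versions), even though the emitted JavaScript falls back to a plain var;

const a = "Hello!";
a = "World!"; // Compile time error: the left-hand side of an assignment cannot be a constant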

Enhanced object literals

More syntactic sugar in the ES6 standard, and this one is especially sweet. Instead of having to define your objects using key value pairs, you can now use a more concise syntax.

Classic ES5 code;

var firstName = "Jon";
var lastName = "Preece";
var person = {
    firstName: firstName,
    lastName: lastName,
    speak: function (what) {
        console.log(firstName + " " + lastName + " said '" + what + "'");
    }
};

You define a couple of variables/functions etc and create an object using keys for property names and values for the value of that property. Functions were also expressed using the function keyword. The enhanced object literal syntax in ES6 allows you to define keys/values in a single pass;

var firstName = "Jon";
var lastName = "Preece";
var person = {
  firstName,
  lastName,
  speak (what) {
    console.log(firstName + " " + lastName + " said '" + what +  "'");
  }
};

And as you might expect, TypeScript transforms this into the long-form ES5 format shown above (the classic ES5 code).

Summary

TypeScript is a transpiler, not to be confused with a compiler. A transpiler takes your code and converts it into a similar format, typically the same language you are working in (in this case, JavaScript). A compiler takes your code and converts it into something completely different (think C# to IL for example).

TypeScript allows you to utilize new language features that are appearing in newer revisions of the ECMAScript standard (6 at the time of writing) and have them transpiled into a form that is widely supported across browsers (ES5 in this case).

Today you can take full advantage of template strings, classes, arrow functions, the ‘for…of’ loop, let + const, enhanced object literals, and more, without having to worry about whether they will work in legacy browsers.

Getting started with TypeScript

This is the 101 tutorial which describes getting started with TypeScript using either the TypeScript Playground, Node.js or VS Code.

At its simplest, TypeScript is a programming language that provides optional static typing for JavaScript.  TypeScript is JavaScript.  Any valid JavaScript is valid TypeScript.  The beauty of TypeScript is that you can define types for your JavaScript variables and functions, and get compile time error checking and error reporting.  This tutorial focuses on getting started with TypeScript and demonstrates the basics to get up and running quickly.

TypeScript Playground

The quickest, easiest way to get started with using TypeScript is to experiment with the TypeScript playground.  The TypeScript playground enables you to write TypeScript code in the browser, and see the resulting compiled JavaScript alongside.

First things first, TypeScript doesn’t try to force you to write code in a particular style.  In fact, you can write 100% pure JavaScript code and get the same code out at the other end.

Try entering the following code in the TypeScript pane on the left;

(function(){
	console.log("Hello, world!");	
})()

See the output?  Identical.  You can take advantage of TypeScript as much or as little as you please.

Refactor the code a little bit, introducing a log function as follows;

(function () {

	function log(message: string) {
		console.log(message);
	}

	log("Hello, World!");

})();

Click the “Run” button on the top-right hand side and press F12 to bring up the developer tools in your browser. Click on the “Console” tab and you should see the message “Hello, World!”.

Hello World!

What happened? Well, not a lot (or so you might think!).  Take a look at the compiled JavaScript code;

(function () {
    function log(message) {
        console.log(message);
    }
    log("Hello, World!");
})();

JavaScript is a dynamic language, it has no concept of types (that’s what TypeScript provides!).  TypeScript uses the type information to catch coding errors at compile time and provide other IDE functionality.  TypeScript generates 100% vanilla JavaScript code that is fully cross browser/cross operating system compatible.  By default, TypeScript will generate ECMAScript 5 (ES5) compliant code, although at the time of writing it is possible to generate ES3 and ES6 code too (and no doubt support will be added for ESx on a yearly basis).

Change the code as follows (changing the log message from a string to a number);

TypeScript Compile Time Behaviour
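
For reference, the code in the screenshot looks something like this (the exact error wording may differ between compiler versions);

(function () {

	function log(message: string) {
		console.log(message);
	}

	log(123); // Error: Argument of type 'number' is not assignable to parameter of type 'string'.

})();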

Three very interesting things have happened here, and this is the perfect demonstration of the general attitude of TypeScript.

Looking at the red arrows, in order from left to right

  1. You get compile time checking.  TypeScript recognizes that you have supplied a value to the log method that is not a string, and highlights the erroneous code to you.
  2. You get a detailed error message that explains in simple terms what the error was, and the type of value that the calling method was expecting (this is typical behaviour regardless of the complexity of code you are writing).
  3. The JavaScript code is generated regardless of these compile time errors.  TypeScript does not force you to adhere to its paradigm.


Node.js

You might be surprised to see Node mentioned on this page.  TypeScript is a Microsoft product, right? Traditionally, tools like this might have been constrained to Microsoft IDEs or operating systems, but today’s modern Microsoft is moving away from that traditional stance and towards being more open.

TypeScript is not only completely free, open source, cross browser, cross operating system, but it is also community driven and actively accepts pull requests directly from community members.  In fact, the tooling is so good that it’s becoming widely adopted in many IDEs, including (but not limited to) Visual Studio, VS Code and Sublime Text.

If you already have Node and Node Package Manager (npm) installed, open a Node.js command prompt and enter the following command to globally install TypeScript;

npm install -g typescript

This will install the TypeScript compiler onto your machine and into your PATH environment variable so you can call it directly.  Change directory to your desktop, and create a file called helloworld.ts.  Add the following code;

(function () {

	function log(message: string) {
		console.log(message);
	}

	log("Hello, World!");

})();

Now enter the following command;

tsc -w helloworld.ts

The watch flag (denoted by the -w) tells the TypeScript compiler to watch your file, meaning that if you make some edits and save your changes, TypeScript will automatically recompile the file for you each time.

Open the helloworld.ts file in Notepad, make a small change, save the file.  You should notice the JS gets updated automatically.

TypeScript compilation completed


VS Code

VS Code, at the time of writing at least (they may or may not streamline this process in the future), requires a little more leg work to get TS files to compile (almost) automatically.  You can find a more comprehensive tutorial over on MSDN, but this is how you get up and running quickly;

  • Create a new folder on your desktop and create a new file called helloworld.ts (or use the one you created for the Node.js part of this tutorial).
  • Add the code shown above (the log function with the string type annotation).
  • Open VS Code, click File > Open Folder… and point to the new folder you just created.
  • Add a new file called tsconfig.json, and add the following;
{
    "compilerOptions": {
        "target": "ES5"
    }
}

Press Ctrl+Shift+B on your keyboard.  This would normally kick off the task runner built into VS Code.  However, we haven’t configured the task runner yet, so a small toolbar will appear at the top telling us there is no task runner configured.  There is a handy button on the right that says “Configure Task Runner”.  Click the button.

Configure VS Code Task Runner

VS Code will now generate a bunch of TypeScript specific configuration for us.  This will be covered in detail in a future post.  For now, however, just accept that TypeScript is ready to go.

Switch back to your helloworld.ts file, click Save and open the equivalent JavaScript file (helloworld.js).  You should see the compiled output.  It can be helpful to put the two files side by side so that you can see the updated changes every time you click Save.

Side By Side View

Wait, there’s more!  TypeScript is a transpiler too…

A transpiler is a method of converting code from one language to another.  So what does this mean for us?

TypeScript allows us to utilize more modern language constructs, which will be transpiled into a more widely supported form.  The simplest example is string interpolation (also known as template strings), which is a feature of the ECMAScript 6 (ES6) standard.

Take the following ES6 code (string interpolation using template strings);

(function () {

	var hello = "Hello";
	var world = "World";
	
	var message = `${hello}, ${world}!`;

	console.log(message);

})();

and the transpiled ES5 output;

(function () {
    var hello = "Hello";
    var world = "World";
    var message = hello + ", " + world + "!";
    console.log(message);
})();

Template strings are not supported in ES5, they are an ES6 feature only.  TypeScript knows this and automatically converts the code into an ES5-compliant form, a process called transpiling.  We will discuss this in more depth in future posts.

Summary

TypeScript is a free, open source, cross-browser, multi-OS tool from Microsoft that enables (but doesn’t force) static typing.  TypeScript generates 100% vanilla JavaScript code that is fully cross browser/cross operating system compatible.  Tooling exists for a wide variety of editors/IDEs including Node, Visual Studio, VS Code, Sublime Text and many more.  As an additional bonus, TypeScript is also a transpiler, meaning you can write code using modern ECMAScript 6 constructs and the resulting code will be ECMAScript 5 compliant.

Devs, this is how I got fit

Right now, I’m in pretty good shape. I have the confidence to say this because I’ve worked very hard over the last 18 months to get to this point. I’m not a muscular person, like what you might see in a designer underwear advert…and this was never my personal goal, but I certainly don’t feel embarrassed anymore when I take my shirt off. I’ve discovered a few very simple patterns for losing weight, gaining muscle (at a slow rate) and generally feeling better about myself. This has been such a big success for me, that I felt it only right to share with you.

I talk about myself a lot in this post, and I apologize for that, but it’s hard not to. I believe that if you follow this advice and these tips, you can achieve the same results as me.

Backstory

Poor diet, smoker, drinker, allergic to exercise guy

Growing up as a teenager in North-West England, UK, in the early 2000’s, I was skinny. I had an extremely poor diet and frankly was “allergic” (not literally, just mentally) to vegetables and healthy food in general. I flat out refused to eat vegetables because I simply didn’t like them. For tea I would have “chips and something”, usually chicken, beef, pork, typically skins on and deep fried.

My diet only got worse as I got older. I worked for a fast food chain for a while, and would regularly eat 3 meals on site up to 5 days a week (this particular fast food restaurant classifies you as a “heavy user” if you consume their food once a week).

When I became independent, i.e. I moved out of my parents house and got my own place, the trend continued for years.

As a software developer, a crazy obsessive one at that, I would spend all my time programming or playing on the games console (PS2/3, or XBox 360). Going outside wasn’t something I would do in a typical month. For a long time, I worked from home and would rarely venture outdoors. Basically I was very inactive, other than to empty the bin or to pick up the post (thank goodness for the elevator in our apartment block!).

At the time, I also smoked and drank alcohol heavily. My favourite tobacco was Golden Virginia (or Amber Leaf if money was a bit tight) and I was an avid whiskey drinker. Every Saturday night I would get so drunk that I would often end up being sick and doing much worse things than I care to share with you!

Any of this sound familiar?

The eureka moment

I think I’m relatively unique in that I didn’t have a eureka moment as such. Over a period of a few months I came to dislike my appearance. I would look at myself in the mirror, typically after a shower or when getting dressed, and I didn’t like what I saw. I wanted to do something about it, but I never made a serious commitment to actually do anything.

I had a couple of fads. I bought myself a press-up bar. Basically it was a small piece of equipment that helped you do push ups, and also doubled up as a pull-up bar that could easily be attached to a door, with no screws required. I would have a go a couple of times a week…hell I might even do 1 or 2 push-ups and a couple of pull-ups. But to be honest, I never once even broke a sweat.

For the last few years, I’ve been waking up early in the morning at the weekend. This is because my body behaves the opposite to most people. When I drink a lot of alcohol, my body temperature increases dramatically and as a result I can’t sleep. I get too hot to sleep, so I go downstairs and do something useful. I might write a blog post, watch a PluralSight course, write some code, or do something. As it’s typically around 3-4am, the place is very quiet and I typically get a lot done, which is great, so this works well for me.

One day, I switched on the TV and one of those sucky TV shopping channels was on. I immediately reached for the remote control, but in the 2.5 seconds it took me to grab it, I was instantly captivated by what they were advertising.

Insanity

Insanity is an exercise program for regular people, thin people, fat people, tall people, short people, men, women, fit people, people looking to lose weight or gain muscle or all of the above. This was a training programme for me that would get me on the road to good health and fitness. Insanity is a 9 week (60 day) workout program that requires no equipment, and can be done at home.

You work out 6 days a week, for a varying amount of time between 38 minutes and 60 minutes. The first week hurts like hell. After that, you get used to it but it never gets easier. In fact, the harder you push yourself, the more rewarding it becomes. Insanity is great for toning your body, improving your general health and fitness, and it gets you active.

The best part of Insanity? It’s all you. No gym, no equipment, no public shame, no awkward showers. You do the workout in the comfort of your own home, and at a time that suits you.

Educating myself about food

Insanity had captured my interest and I instantly knew that I wanted to do it. Over the years I have gone from being the sort of person who makes snap decisions, takes risks, and generally doesn’t engage the brain before speaking, to a deep methodical thinker, less inclined to take risks but who favours calculated risks.

If this was going to work, I had to give it 100% commitment. No excuses, no BS, no slip ups… for 9 solid weeks. I immediately set about educating myself about food.

Calorie Requirements

I had heard the term, like most people, “calories”. I knew that food contained calories. Healthier food contains fewer calories and unhealthy food contains more calories. Strictly speaking, this is not always true, but that is the basic idea. My body is a power station, and calories are its fuel. Understanding how many calories my body needs was the first challenge.

In the food guide that accompanied the Insanity workout programme was a useful formula for calculating your body’s calorie needs. The formula was based on the Harris Benedict equations. I typed up the formula into Microsoft Excel and added about 30%, as recommended by the guide based on the frequency of exercise I was planning to do.

Using this very simple formula, I determined that I needed to consume about 3000 calories a day to lose weight and survive the workouts. To put that into perspective, the average male requires around 2500 calories a day to maintain their body-weight, and I would have to eat more than that!
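
For the curious, here is roughly what my Excel formula was doing, sketched as TypeScript. This uses the original Harris Benedict equation for men, not the exact formula from the Insanity guide, and the example numbers are purely illustrative;

function dailyCalories(weightKg: number, heightCm: number, ageYears: number): number {
    // Basal Metabolic Rate: the calories your body burns at rest (Harris Benedict, men)
    var bmr = 66.5 + (13.75 * weightKg) + (5.003 * heightCm) - (6.755 * ageYears);
    return bmr * 1.3; // add roughly 30% for frequent, high intensity exercise
}

console.log(dailyCalories(89, 185, 25)); // roughly 2660 calories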

I had already learnt my first lesson. It is possible to eat more food and still lose weight, as long as the food is fresh and healthy. The first step was to throw away all the junk food in every cupboard in my kitchen. And that included sugar, soft drinks, and all the processed frozen food in my freezer.

Eating healthily

After determining how many calories my body needs to function correctly, and withstand regular exercise, I now had to understand what foods contained what calories. If I was going to consume 3000 calories a day, which is not a small task, I would need a solid plan and schedule for eating, cooking, and cleaning (cooking generates a lot of dishes!).

The key to losing weight is to eat more food. As many as 5 meals a day. This may seem like bad advice. After all, if you want to lose weight, then you should surely eat less food, right? Well no. The idea is that the more food you eat, the more energy your body uses to digest and process said food. If you eat 5 times a day (breakfast, morning snack, lunch, afternoon snack, evening dinner) then your body is going to be continuously processing food, and burning calories in the process. If you’re eating good, healthy, nutritional food which is low in fat and high in protein (and moderate in carbohydrates), combined with very regular, high intensity training, then you end up in caloric deficit (you consume fewer calories than your body burns, therefore you lose weight).

How do you eat healthily?

Pro tip 1 – Plan all meals in advance

Before you deep dive into buying every fruit and vegetable on sale at the supermarket, you should take a step back and plan what you’re going to eat. This is an alien concept to a lot of people today, but you can actually think ahead of time, list all the ingredients that make up each of your meals, and buy all those ingredients in a single trip to the supermarket each week. Only going shopping once a week is not only going to save you money (no impulse buying) but it will save you time (handy, because you’re going to need that for exercising) and reduce temptation/chances of falling off the wagon.

Sit down and research all your meals, list all the ingredients on a Microsoft Word document, and gather together all the recipes you need.

Here are 3 different shopping lists I have put together that you can use as an example. Note that these lists are catered to my personal needs and tastes, and you may need to adjust accordingly. They’re also not 100% accurate.

  • Current shopping list. This is the list I’m using right now (I’m doing daily fasting at the minute, which is why there are no breakfasts or snacks)
  • Insanity shopping list. Based on the Insanity healthy eating plan, with several customizations for my own tastes.
  • Focus T25 shopping list. Based on the Focus T25 healthy eating plan, again with my own twists and customizations.

As a very important note, you need some self discipline here. Whatever eating plan you use, you should stick to it 100% with no slip ups, snacks, nibbles, nothing. Losing weight is a serious commitment and can easily be derailed by the odd treat.

On the other hand, I regularly schedule a treat meal for myself (perhaps once a week). This might be on a Friday night as a reward for the hard work done during the week, but it is always planned ahead of time and is never done on impulse.

Avoid all trans-fats (bad fats), cut all sugar out of your diet, excluding that found in fruit, and eat protein rich foods such as; chicken, beef and turkey. Nuts are super healthy and contain a lot of good fats that your body needs. My favourites include; walnuts, pecans, hazelnuts, cashews and some dried fruits with a twist of salt.

Note, I recommend changing up your meal plan every 4 weeks. There are two reasons for this. First is that eating the same food over and over gets boring. Second is that your body certainly seems to get used to the food it consumes, and your weight loss may plateau or stop altogether. Shaking things up stops this from happening. You will also want to revisit the Harris Benedict equation from time to time to make sure you are consuming the correct amount of calories (this figure will likely decrease as your body weight decreases).

Pro tip 2 – Always cook fresh

I mentioned earlier that you will save a lot of time not going to the supermarket every day, because you’ve bought all your fresh food in a single trip to the supermarket each week.

I highly recommend that you cook all your meals fresh and just-in-time. This can be time consuming, but here are some tips to make it work;

  • Cook several meals at once. If you can, cook breakfast, lunch and dinner at the same time. I typically do this when cooking the dinner each evening. Stick breakfast and lunch in the fridge, and tuck in to dinner straight away.
  • Recruit a significant other to chip in with the dishes. Ask your other half, girlfriend, wife, (mother?), to give you a hand with the dishes. Have one person washing and one person drying. This will speed up the task dramatically.
  • Memorize each dish. You’ll get the cooking done much quicker if you know the timings, quantities and everything else without having to look at your chart/recipe list. However, be sure to have it on hand if needed.

Pro tip 3 – Eat or starve

If you are like how I used to be, i.e. “allergic” to any form of even remotely healthy food, you should use the Eat or starve technique. It’s pretty simple really, eat the food you have prepared, or go without. Don’t buy any “extras” from the supermarkets, and throw away or donate any unhealthy food you have lying around to somebody who needs it. After a couple of days, once your stomach is rumbling continuously, your taste buds will dull slightly and you’ll become more open to experimentation. Treating yourself when nobody is looking is going to do a lot more harm than good.

Another good tip, and I have done this myself several times: if you find it particularly difficult to eat a particular type of food (a salad, for example) then go to a very public place and eat it there. Get together with a group of friends and order something healthy. You’re not going to embarrass yourself in front of your peers by not eating it, especially if everybody else is eating healthy too!

It’s also a good idea to take daily vitamin supplements, to be sure that you’re getting all the vitamins and minerals your body needs.

Exercising regularly

I’ve learnt a little secret that people generally don’t seem to know about. You don’t have to exercise to lose weight. In fact, if you are quite a bit overweight, I would recommend that you actually do no exercise at all for the first 2-3 weeks of your diet (or longer, if you’re quite a quite a bit overweight [not a typo!]). You’ll shed the pounds quickly to start off with, whether you exercise or not.

What is the purpose of exercising? Exercising is great for toning your body, building muscle, improving your core strength and generally helping you feel happier within your own skin. Exercise gives your body definition and shape.

Workout at home to start off with

Signing up for the gym is a huge commitment. Not only is it potentially expensive (the average gym membership fee is probably between £20-£50 a month), but it requires a lot of your time.

If you go to the gym, you have to;

  • Drive or walk to the gym
  • Say “Hi!” to the receptionist
  • Negotiate several layers of security, using various levels of security cards or keys
  • Find a safe corner in the locker room where you feel comfortable enough to change
  • Pluck up the courage to work out in front of other people, and occasionally have to speak to people (!) when you cross their paths
  • Negotiate the showers and the soap
  • Get dressed
  • Drive home, and sob about the whole experience.

As a beginner, this indeed can be a very trying and stressful process. And ideally, you would repeat this 3-4 or perhaps 5 times a week depending on your programme.

There is an alternative, my friend: work out at home;

  • Drive home (you were going there anyway)
  • Get changed in comfort of your own house
  • Work out in total privacy
  • Get showered, and dressed in private.
  • You’re already home! Time to relax.

There are many home based workout programmes out there. My favourites are Insanity: The Ultimate Cardio Workout and Fitness DVD Programme and Shaun T’s FOCUS T25 DVD Workout Programme; both are very popular. If you don’t like the look of either of these programmes, have a browse around the web and find something similar that will work for you. Start with Amazon, I’m sure there are hundreds!

Measure, measure, measure.

If you are going to embark upon an exercise and workout programme, I highly recommend that you set realistic targets, and track your progress every single day.

So as a ritual, when you wake up in the morning, do the following;

  • Go to the toilet, do all your business.
  • Weigh yourself, write it down.
  • Measure your chest, arms, belly, and thighs. Write it down.

This might seem obsessive at first, and perhaps it is a little, but measuring your performance on a daily basis will help you maintain your focus (and actually becomes a little exciting after a while!).

Protein supplements

To finish the story

At the peak, I was approaching 14st (196lb). As somebody who is of a slender frame, this was very scary to me and according to my BMI, I was well in the “Overweight” category.

Today, I am a very healthy 11st (154lb) and I feel great. I am still eating healthily and I am doing the intermittent fasting eating regime, as discussed by James Clear on his blog. I exercise every single day, and hit the gym regularly. I now have good respect for food and I enjoy eating and preparing it. Turning my life around has made me a happier person and a better, more focused developer. You can achieve the same too. The trick is, it starts today.

TypeScript Tips and Tricks

Automatically compile TypeScript files when using VS Code

If you’re writing TypeScript using Visual Studio, your files are automatically compiled when you save (assuming you haven’t turned this off…the feature is found in the Project Properties > TypeScript Build screen).

If you don’t use Visual Studio, and instead are using a lightweight IDE such as VS Code or Sublime Text, you don’t get this feature.

Manual compilation

First things first, how would you normally compile a TypeScript file to JavaScript? VS Code doesn’t do this out of the box (perhaps they will add this functionality in the future, I’m not sure). You use a task runner.

To configure a task runner, open your project that has some TypeScript files, and press Ctrl+Shift+B. A little notification will appear telling you that no task runner is configured. Click Configure Task Runner. VS Code will create a directory called .settings and add a new JSON file called tasks.json to this directory.
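
At the time of writing, the generated configuration looks something like this (the exact contents vary between VS Code versions);

{
    "version": "0.1.0",
    "command": "tsc",
    "isShellCommand": true,
    "showOutput": "silent",
    "args": ["HelloWorld.ts"],
    "problemMatcher": "$tsc"
}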

Open tasks.json and inspect the default configuration (the one that isn’t commented out; VS Code also shows some other sample configurations that are commented out). Look for the following line;

"args": ["HelloWorld.ts"]

Change this path to point at a TypeScript file in your project. Save changes.

Now open your TypeScript file, and open it in side by side view. Put the TypeScript file (.ts) on the left, and put the compiled JavaScript code (.js) on the right.

Make a change to your TypeScript file and press Ctrl+Shift+B again. You should see the updated JavaScript file.

Automatic compilation

Having to press Ctrl+Shift+B every time you want to compile your TypeScript files gets really old, really fast. Let’s say you make a change, refresh your browser, and the change hasn’t been applied. You open the dev tools, start debugging, and hang on… where is your code? Oh right yeah, you forgot to run the task runner. Rinse and repeat.

Instead of using a task runner to call the TypeScript compiler (tsc.exe), we can instead make tsc work for us, using the -w flag, called the watch flag.

Wrong Way

My first thoughts when trying to get this to work were to pass the -w flag to tsc using VS Code. Try opening tasks.json and changing the args option as follows;

"args": ["-w", "test.ts"],

Yeah that doesn’t work (even though other sample configurations shown in the same file pass commands to tsc in this manner).

Right Way

The best way to do this is via the command line. Open a Node.js command prompt window, and change directory (cd) to your project folder. Now run the following command;

tsc -w

This will listen for changes in the given directory, and all sub directories. Whenever a change is made, the file will be recompiled automatically. So now, pressing Ctrl+S on the keyboard will cause the file to be recompiled.

We’re almost there. If you want the file to automatically compile pretty much as you type (not quite that frequently, but almost as good), you can enable Auto Save in VS Code. Click File > Auto Save to turn it on.

Success! All your TypeScript files will be automatically saved and compiled as you work on them, with no keyboard presses needed.

I mentioned Sublime Text at the beginning of this tip because, of course, this isn’t a VS Code specific feature. You can easily enable this regardless of the editor you are using.

Source maps

Source maps are a means of mapping compiled and minified code back to its original state. When you write TypeScript, it is transpiled to JavaScript and can be minified to reduce bandwidth costs and improve page load times. This process, however, makes debugging virtually impossible due to the fact that all variable names are changed and all white-space and comments are removed, etc.

Browsers use source maps to translate this code back into its original state, enabling you to debug TypeScript code straight from the dev tools. Pretty neat huh?!

Modern browsers (IE10, Chrome, Firefox) enable source maps by default.

However, I have on many occasions encountered errors and inconsistencies when using source maps, and it is not just me who is encountering these issues. The dev tools might tell me, for example, the wrong value for this. TypeScript captures the value of this in a local variable, so that, in theory, you’re always using the right this (it typically captures the instance of the class itself). Often, however, dev tools will incorrectly tell me that this is an instance of window… rendering the debugger useless.
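
To illustrate the capture, take a class that uses an arrow function to reference this (a minimal example of my own). TypeScript hoists this into a local _this variable in the transpiled output, and it is this variable that the dev tools sometimes fail to map back correctly;

class Greeter {
    private message: string = "Hello";
    greet = () => console.log(this.message);
}

transpiles to;

var Greeter = (function () {
    function Greeter() {
        var _this = this;
        this.message = "Hello";
        this.greet = function () { return console.log(_this.message); };
    }
    return Greeter;
})();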

How to turn source maps off

There are a couple of ways to approach this.

Stop generating source maps

TypeScript is generating the source maps for you.

If you are using Visual Studio, you can stop generating source maps by going to Project Properties > TypeScript Build and un-checking Generate Source Maps. Be sure to rebuild your project.

For everybody else, simply don’t pass the --sourceMap argument to tsc.

Disable source maps in the browser

Modern browsers have the ability to disable source maps.

Chrome
1. Open dev tools
2. Settings
3. Sources (middle column)
4. Un-tick “Enable JavaScript source maps”

Firefox
1. Open dev tools
2. Debugger
3. Settings
4. Un-tick “Show Original Sources”

Combine output in a single file

There are a bunch of tools available to take care of bundling and minification of JavaScript files. You’ve probably used either ASP .NET Bundling & Minification, Web Essentials bundler, Uglify or something similar. These tools generally work well and I’ve only experienced minor problems with each tool. (The ASP .NET bundler is a bit more troublesome than most, to be fair).

When using a task runner such as Grunt or Gulp, you pass in an array of all the file paths you want to include in the bundle.

Example;

files: {
    'dest/output.min.js': ['src/input.js']
}

Generally I don’t take this approach of passing in individual files, I prefer to pass in a recursive path, perhaps something like this;

"**/*.ts"

That aside, if you prefer to include your files individually in your Grunt/Gulp file, TypeScript can help you out by combining all the compiled JavaScript into a single file.

Using Visual Studio

If you’re using Visual Studio, there is a simple user interface to set this up. Right click your project and select Project Properties > TypeScript Build.

Under the Output header, there are two options of interest;

  1. Combine JavaScript output into file: – Name of the file to put the merged JavaScript.
  2. Redirect JavaScript output to directory: – The folder in which to put the merged file.

Typically you would use these two in conjunction with each other. You can then modify your Grunt/GulpFile to point at the merged file, rather than all your un-merged JavaScript files.

Via command prompt

The flags you need are as follows;

--out FILENAME
--outDir DIRECTORYPATH
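
Usage looks like this (the file names here are purely illustrative);

tsc --out app.js main.ts helpers.ts
tsc --outDir dist main.ts helpers.ts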

Available from version 1.5

Version 1.5 of the TypeScript compiler (version 1.5.3 to be precise, use the -v flag to check you aren’t using 1.5.0 beta) adds a few additional features of some use (this is not an exhaustive list);

-m KIND or --module KIND

OK, this isn’t new, but TypeScript 1.5 has added support for UMD and System, so you can now pass either name through to use that module system. There is an updated UI in Visual Studio 2015 RTM for this feature.

--noEmitOnError

Doesn’t generate any JS files if any compile time errors are detected. You might want to leave this flag off if you need to ignore certain errors (such as an incorrect or incomplete type declaration file).

--preserveConstEnums

The default behaviour of tsc is to erase const enums, replacing each usage with the actual value of the enumeration.

So if your enum was;

const enum DaysOfWeek {
    Monday,
    Tuesday
    ...
}

...

console.log(DaysOfWeek.Monday);

The transpiled output would be;

console.log(0 /*Monday*/)

Passing the --preserveConstEnums flag tells the compiler to keep the generated enumeration object in the transpiled output (usages are still inlined), so the original enumeration stays intact.
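
For example, compiling the const enum above with --preserveConstEnums emits something along these lines: the usage is still inlined, but the enumeration object itself is kept;

var DaysOfWeek;
(function (DaysOfWeek) {
    DaysOfWeek[DaysOfWeek["Monday"] = 0] = "Monday";
    DaysOfWeek[DaysOfWeek["Tuesday"] = 1] = "Tuesday";
})(DaysOfWeek || (DaysOfWeek = {}));

console.log(0 /* Monday */);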

Summary

The TypeScript compiler is flexible and configurable and has a wealth of flags that can be passed to it to change the transpiled output. Using Node.js tools, you can easily have your TS files automatically transpiled, you can have source maps for a better debugging experience, and you can have all the transpiled TS files merged into a single file for easier uglification.

I’ll update this post with new tips and tricks as I come across them. If you have any good ones, let me know in the comments or get me on Twitter, and I’ll update the post accordingly.

If you’re interested in learning about all the different compiler flags that TypeScript accepts, you can find them in GitHub.