
Saturday, 6 September 2014

Serializing and De-serializing JSON data Using ServiceStack

ServiceStack is a light-weight, complete and independent open source web framework for .NET. I recently started playing with it and I must say that it is an awesome framework. It has several nice features, including what is claimed to be .NET’s fastest JSON serializer.

Each piece of ServiceStack can be used independently, and its serialization library is no exception. The serialization package of ServiceStack can be installed via NuGet using the following command:

Install-Package ServiceStack.Text

This package can be installed in any type of application. Let’s use the following Person class to create objects to serialize and de-serialize:

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
    public string Occupation { get; set; }
}


Following is a sample object of the Person class:

var person = new Person()
{
    Id = 1,
    Name = "Ravi",
    City = "Hyderabad",
    Occupation = "Software Engineer"
};


To serialize this object to a JSON string, we need to use the ServiceStack.Text.JsonSerializer class. Following statement serializes the above object:

var serialized = JsonSerializer.SerializeToString(person);


The above string can be de-serialized using the following statement:

var converted = JsonSerializer.DeserializeFromString<Person>(serialized);


The JsonSerializer class also has APIs to serialize to and de-serialize from a TextWriter or a Stream.
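
For instance, a round trip through a stream can be sketched as follows. This is a minimal example assuming the Person class and person object shown above; any readable/writable Stream can be used in place of the MemoryStream:

```csharp
using System.IO;
using ServiceStack.Text;

// Serialize the person object into a MemoryStream and read it back.
using (var stream = new MemoryStream())
{
    JsonSerializer.SerializeToStream(person, stream);

    // Rewind the stream before de-serializing
    stream.Position = 0;
    var copy = JsonSerializer.DeserializeFromStream<Person>(stream);
}
```

The TextWriter-based counterparts (SerializeToWriter and DeserializeFromReader) follow the same pattern.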

The APIs in ServiceStack are light-weight and easy to use. I am working on a series of articles on this great framework. Stay tuned for updates.

Happy coding!

Sunday, 8 June 2014

Expanding my Writing Zone

I started this blog as a novice blogger less than two years ago, and it has been an amazing writing experience so far. I can’t say enough about the accolades I received from the readers of this blog, and also the constructive criticism that helped me write better. Thanks to each one of you reading this blog; I will continue writing good content here. In addition to writing for my blog, I have started writing content for two of the leading content publishers on the internet.

DotNet Curry Magazine:
DotNet Curry Magazine (DNC Magazine) is a free magazine for .NET developers around the world. It was started and is run by a group of technology experts including Suprotim Agarwal, a Microsoft MVP for ASP.NET/IIS. Suprotim is the Editor-in-Chief of the magazine. The magazine releases an issue every alternate month with high-quality content on the latest Microsoft technologies. I am one of the thousands of subscribers to the magazine and I highly recommend subscribing to it. The authors are MVPs and experts from around the world, and I am fortunate to have joined the team of authors. My first article for DNC was published in the May 2014 edition of the magazine and is now available on their website too; you can check it here. Stay tuned for lots of .NET content on DNC Magazine from all the authors.


Check my author page on DotNetCurry site: http://www.dotnetcurry.com/author.aspx?AuthorName=Ravi%20Kiran

Site Point:
Sitepoint is well known to many developers as a source of knowledge on several topics including HTML, CSS, JavaScript, Mobile, UX and Design. They publish articles, books and courses, and also run Q&A forums. I have read several of their articles focused on front-end web technologies. I contacted Sitepoint to ask if I could write for them, and the editor-in-chief Ophelie Lechat accepted me as an author for their site. As most of you would have already guessed, I will be posting articles focused on JavaScript for Sitepoint. My first article for Sitepoint is published; you can read it here. Also check out articles by the other authors; they will certainly help you. I have a nice set of articles planned for Sitepoint. Follow their Twitter feed for tweets on their articles and for technology news.

Check my author page on SitePoint: http://www.sitepoint.com/author/rkiran/

I was busy getting started on these writing assignments and writing the articles, so I couldn’t find enough time to blog. But I have a nice series planned for this blog, and you will see the posts popping up soon.

Happy coding!

Sunday, 9 March 2014

A Closer Look at the Identity Client Implementation in SPA Template

Sometime back, we took a look at the identity system in ASP.NET and its support for both local and remote accounts. In this post, we will take a closer look at the way it is used in the Single Page Application template that comes with Visual Studio 2013.

The SPA application is a JavaScript client that uses an authenticated Web API. The Web API exposes a number of API methods to manage users. The JavaScript client uses jQuery for AJAX and Knockout for data binding, and has a number of methods that accomplish the task of managing users. Let’s take a look at how local users are authenticated.

Identity Client Implementation for Local Users:
If you check the Web API’s authentication configuration class, it exposes an endpoint at /Token for serving authorization tokens. The login credentials of the user have to be posted to this URL to log the user in. In response, the endpoint sends some data, of which the access token is one component. The access token has to be saved and used in subsequent requests to indicate the session of the logged-in user.

The following code in the app.datamodel.js file sends a POST request to the ‘/Token’ endpoint to log in the user:


self.login = function (data) {
    return $.ajax(loginUrl, {
        type: "POST",
        data: data
    });
};

Following is the snippet from login.viewmodel.js that calls the above method:
dataModel.login({
    grant_type: "password",
    username: self.userName(),
    password: self.password()
}).done(function (data) {
    self.loggingIn(false);

    if (data.userName && data.access_token) {
        app.navigateToLoggedIn(data.userName, data.access_token, self.rememberMe());
    } else {
        self.errors.push("An unknown error occurred.");
    }
});


There are a couple of things to be noticed in the above code:

  • The property grant_type has to be set in the data passed to the login method
  • The navigateToLoggedIn method stores the access token in browser’s local storage to make it available for later use

To sign up a new user, we just need to post the user’s data to the endpoint at “/api/Account/Register”. Once the user is successfully registered, the client sends a login request to the token endpoint discussed above to log the user in immediately. The code for this can be found in the register method of the register.viewmodel.js file. Following is the snippet that deals with registering and logging in:


dataModel.register({
    userName: self.userName(),
    password: self.password(),
    confirmPassword: self.confirmPassword()
}).done(function (data) {
    dataModel.login({
        grant_type: "password",
        username: self.userName(),
        password: self.password()
    })
    ......


Identity Client Implementation for External Login Users:
Identity implementation for external accounts is a bit tricky, because it involves handing control over to an external site and making a number of API calls. When the application loads, it gets the list of external login providers by sending an HTTP GET request to “/api/Account/ExternalLogins”. The following method in app.datamodel.js does this:


self.getExternalLogins = function (returnUrl, generateState) {
    return $.ajax(externalLoginsUrl(returnUrl, generateState), {
        cache: false,
        headers: getSecurityHeaders()
    });
};

The parameter returnUrl used in the above method is the URL to which the user is redirected after logging in on the external site.

Based on the providers obtained from the server, provider-specific login buttons are shown on the page. When a user chooses an external login provider, the system redirects him to the login page of the chosen provider. Before redirecting, the script saves the state and the redirect URL in session storage and local storage (both are used to make it work across browsers). This information will be used once the user comes back to the site after logging in. The following snippet in login.viewmodel.js does this:


self.login = function () {
    // 'data' holds the provider details (state and login URL) obtained earlier from getExternalLogins
    sessionStorage["state"] = data.state;
    sessionStorage["loginUrl"] = data.url;
    // IE doesn't reliably persist sessionStorage when navigating to another URL. Move sessionStorage temporarily
    // to localStorage to work around this problem.
    app.archiveSessionStorageToLocalStorage();
    window.location = data.url;
};


Once the user is successfully authenticated by the external provider, the provider redirects the user to the return URL with an authorization token in the query string. This token is unique to the combination of user and provider, and is used to check whether the user has already registered with the site. If not, the user is asked to register. Take a look at the following snippet from the initialize method in app.viewmodel.js:
......
} else if (typeof (fragment.access_token) !== "undefined") {
    cleanUpLocation();
    dataModel.getUserInfo(fragment.access_token)
        .done(function (data) {
            if (typeof (data.userName) !== "undefined" && typeof (data.hasRegistered) !== "undefined"
                    && typeof (data.loginProvider) !== "undefined") {
                if (data.hasRegistered) {
                    self.navigateToLoggedIn(data.userName, fragment.access_token, false);
                }
                else if (typeof (sessionStorage["loginUrl"]) !== "undefined") {
                    loginUrl = sessionStorage["loginUrl"];
                    sessionStorage.removeItem("loginUrl");
                    self.navigateToRegisterExternal(data.userName, data.loginProvider, fragment.access_token,
                        loginUrl, fragment.state);
                }
                else {
                    self.navigateToLogin();
                }
            } else {
                self.navigateToLogin();
            }
        })
......


It does the following:

  • Clears the access token from the URL
  • Queries the endpoint ‘/api/UserInfo’ with the access token for information about the user
  • If the user is found in the database, it navigates to the authorized content
  • Otherwise, navigates to the external user registration screen, where it asks for a local name

The external register page accepts a name of the user’s choice and posts it to ‘/api/RegisterExternal’. After saving the user data, the system redirects the user to the login page with the value of state saved in session and local storage, and passes the authorization token in the URL. The login page uses this authorization token to identify the user.


dataModel.registerExternal(self.externalAccessToken, {
        userName: self.userName()
    }).done(function (data) {
        sessionStorage["state"] = self.state;
        // IE doesn't reliably persist sessionStorage when navigating to another URL. Move sessionStorage
        // temporarily to localStorage to work around this problem.
        app.archiveSessionStorageToLocalStorage();
        window.location = self.loginUrl;
    })


For logout, an HTTP POST request is sent to ‘/api/Logout’. It revokes the claims identity associated with the user, and the client code removes the access token entry from local storage.

Happy coding!

Monday, 17 February 2014

Consuming ASP.NET Web API OData Batch Update From JavaScript

Consuming OData with plain JavaScript is a bit painful, as we would have to handle some low-level conversions ourselves. datajs is a JavaScript library that simplifies this task.

datajs converts data of any format it receives from the server into an easily readable format. The batch request is sent as a POST request to the path /odata/$batch, which is made available if the batch update option is enabled in the OData route configuration.

As seen in the last post, a batch update request bundles a number of create (POST), update (PUT) and patch (PATCH) requests. These operations have to be specified in an object literal. The following snippet demonstrates it with a patch, an update and a create request:


var requestData = {
    __batchRequests: [{
        __changeRequests: [
            {
                requestUri: "/odata/Customers(1)",
                method: "PATCH",
                data: {
                    Name: "S Ravi Kiran"
                }
            },
            {
                requestUri: "/odata/Customers(2)",
                method: "PUT",
                data: {
                    Name: "Alex Moore",
                    Department: "Marketing",
                    City:"Atlanta"
                }
            },
            {
                requestUri: "/odata/Customers",
                method: "POST",
                data: {
                    Name: "Henry",
                    Department: "IT",
                    City: "Las Vegas"
                }
            }
        ]
    }]
};

Following snippet posts the above data to the /odata/$batch endpoint and then extracts the status of response of each request:

OData.request({
    requestUri: "/odata/$batch",
    method: "POST",
    data: requestData,
    headers: { "Accept": "application/atom+xml"}
}, function (data) {
    for (var i = 0; i < data.__batchResponses.length; i++) {
        var batchResponse = data.__batchResponses[i];
        for (var j = 0; j < batchResponse.__changeResponses.length; j++) {
            var changeResponse = batchResponse.__changeResponses[j];
        }
    }
    alert(window.JSON.stringify(data));
}, function (error) {
    alert(error.message);
}, OData.batchHandler);


Happy coding!

Sunday, 16 February 2014

Batch Update Support in ASP.NET Web API OData

ASP.NET Web API 2 OData includes some new features, including support for batch updates. This feature allows us to send a single request to the OData endpoint with a bunch of changes made to the entities and ask the service to persist them in one go, instead of sending an individual request for each change. This reduces the number of round trips between the client and the service.

To enable this option, we need to pass an additional parameter to the MapODataRoute method along with the other routing details we discussed in an older post. The following statement shows this:


GlobalConfiguration.Configuration.Routes.MapODataRoute("ODataRoute", "odata", model, new DefaultODataBatchHandler(GlobalConfiguration.DefaultServer));


I have a CustomersController that extends EntitySetController and exposes an OData API to perform CRUD and patch operations on the entity Customer, which is stored in a SQL Server database and accessed using Entity Framework. Following is the code of the controller:

public class CustomersController : EntitySetController<Customer, int>
{
    CSContactEntities context;

    public CustomersController()
    {
        context = new CSContactEntities();
    }

    [Queryable]
    public override IQueryable<Customer> Get()
    {
        return context.Customers.AsQueryable();
    }

    protected override int GetKey(Customer entity)
    {
        return entity.Id;
    }

    protected override Customer GetEntityByKey(int key)
    {
        return context.Customers.FirstOrDefault(c => c.Id == key);
    }

    protected override Customer CreateEntity(Customer entity)
    {
        try
        {
            context.Customers.Add(entity);
            context.SaveChanges();
        }
        catch (Exception)
        {
            throw new InvalidOperationException("Something went wrong");
        }

        return entity;
    }

    protected override Customer UpdateEntity(int key, Customer update)
    {
        try
        {
            update.Id = key;
            context.Customers.Attach(update);
            context.Entry(update).State = System.Data.Entity.EntityState.Modified;

            context.SaveChanges();
        }
        catch (Exception)
        {
            throw new InvalidOperationException("Something went wrong");
        }

        return update;
    }

    protected override Customer PatchEntity(int key, Delta<Customer> patch)
    {
        try
        {
            var customer = context.Customers.FirstOrDefault(c => c.Id == key);

            if (customer == null)
                throw new InvalidOperationException(string.Format("Customer with ID {0} doesn't exist", key));

            patch.Patch(customer);
            context.SaveChanges();
            return customer;
        }
        catch (InvalidOperationException)
        {
            // Re-throw without resetting the stack trace
            throw;
        }
        catch (Exception)
        {
            throw new Exception("Something went wrong");
        }
    }

    public override void Delete(int key)
    {
        try
        {
            var customer = context.Customers.FirstOrDefault(c => c.Id == key);

            if (customer == null)
                throw new InvalidOperationException(string.Format("Customer with ID {0} doesn't exist", key));

            context.Customers.Remove(customer);
            context.SaveChanges();
        }
        catch (InvalidOperationException)
        {
            // Re-throw without resetting the stack trace
            throw;
        }
        catch (Exception)
        {
            throw new Exception("Something went wrong");
        }
    }
}


Let’s use the batch update feature from a .NET client application by adding a new customer and updating an existing customer in the Customers table. The following code does this:

Container container = new Container(new Uri("http://localhost:<port-no>/odata"));

var firstCustomer = container.Customers.Where(c => c.Id == 1).First();
firstCustomer.Name = "Ravi Kiran";
container.UpdateObject(firstCustomer);

var newCustomer = new Customer() { Name="Harini", City="Hyderabad", Department="IT"};
container.AddToCustomers(newCustomer);

var resp = container.SaveChanges(SaveChangesOptions.Batch);


The last statement in the above snippet sends a batch request containing two requests: one to add a new customer and one to update an existing customer. The update operation calls the patch operation exposed by the OData API.
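
The DataServiceResponse returned by SaveChanges can be inspected to check how each operation in the batch fared. A minimal sketch, assuming `resp` is the response obtained above (OperationResponse is part of the WCF Data Services client library):

```csharp
// Each OperationResponse corresponds to one request bundled in the batch.
foreach (OperationResponse operation in resp)
{
    Console.WriteLine("Status code: {0}", operation.StatusCode);

    // A non-null Error indicates that this particular operation failed
    if (operation.Error != null)
    {
        Console.WriteLine("Error: {0}", operation.Error.Message);
    }
}
```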

In the next post, we will see how to perform a batch update from a JavaScript client.

Happy coding!

Tuesday, 7 January 2014

Using Dependency Injection with ASP.NET Web API Hosted on Katana

Using Katana we can build a light-weight server to host an ASP.NET application without needing IIS or System.Web. Katana has nice support for hosting SignalR and Web API with very few lines of code. In a previous post, we saw how to self-host SignalR using Katana. In this post, we will see how easy it is to perform dependency injection in a self-hosted Katana application.

Open Visual Studio 2013 and create a new console application. Add the following NuGet packages to the application:


  • Install-Package Microsoft.AspNet.WebApi.Owin
  • Install-Package Ninject


The first package pulls in a bunch of packages for creating and hosting Web API. The second package installs the Ninject IoC container.

Add an API Controller to the application and change the code of the controller to:

public class ProductsController : ApiController
{
    IProductsRepository repository;

    public ProductsController(IProductsRepository _repository)
    {
        repository = _repository;
    }

    public IEnumerable<Product> Get() 
    {
        return repository.GetAllProducts();
    }
}

We need to inject an instance of a concrete implementation of IProductsRepository; we will do this using dependency injection. You can create the repository interface and an implementation class on your own.
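
In case you want a starting point, a minimal sketch of the repository could look like this (the Product type and the hard-coded data are placeholders standing in for a real data source):

```csharp
using System.Collections.Generic;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductsRepository
{
    IEnumerable<Product> GetAllProducts();
}

public class ProductRepository : IProductsRepository
{
    // In-memory data used as a placeholder for a database or service call
    public IEnumerable<Product> GetAllProducts()
    {
        return new List<Product>
        {
            new Product { Id = 1, Name = "Keyboard" },
            new Product { Id = 2, Name = "Mouse" }
        };
    }
}
```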

To perform dependency injection with Web API, the WebApiContrib project provides a set of NuGet packages, including one for Ninject. But the version of Web API it requires is 4.0.30319, while the default version in VS 2013 is 5.0.0.0, so installing this NuGet package may result in version compatibility issues. To avoid this, we can build the WebApiContrib.Ioc.Ninject project using VS 2013, or copy the code of the NinjectResolver class from the GitHub repo and add it to the console application.
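
If you choose to add the resolver by hand, the following is a simplified sketch of a Ninject-based resolver modelled on WebApiContrib’s implementation (the actual class in the repo is more thorough, so prefer copying that where possible):

```csharp
using System;
using System.Collections.Generic;
using System.Web.Http.Dependencies;
using Ninject;
using Ninject.Syntax;

public class NinjectResolver : NinjectScope, IDependencyResolver
{
    private readonly IKernel kernel;

    public NinjectResolver(IKernel kernel) : base(kernel)
    {
        this.kernel = kernel;
    }

    public IDependencyScope BeginScope()
    {
        // Each request gets its own resolution scope backed by an activation block
        return new NinjectScope(kernel.BeginBlock());
    }
}

public class NinjectScope : IDependencyScope
{
    protected IResolutionRoot resolutionRoot;

    public NinjectScope(IResolutionRoot resolutionRoot)
    {
        this.resolutionRoot = resolutionRoot;
    }

    public object GetService(Type serviceType)
    {
        // TryGet returns null for unresolvable types, which is what Web API expects
        return resolutionRoot.TryGet(serviceType);
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return resolutionRoot.GetAll(serviceType);
    }

    public void Dispose()
    {
        var disposable = resolutionRoot as IDisposable;
        if (disposable != null)
            disposable.Dispose();
    }
}
```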

In the OWIN Startup class, we need to configure Web API and tie it to an instance of NinjectResolver. The constructor of NinjectResolver accepts an argument of type IKernel. The kernel is the object where we define the mapping between requested types and the types to be returned. Let’s create a class with a static method that creates the kernel for us. Add a class named NinjectConfig and change its code to:

public static class NinjectConfig
{
    public static IKernel CreateKernel()
    {
        var kernel = new StandardKernel();

        try
        {
            kernel.Bind<IProductsRepository>().To<ProductRepository>();
            return kernel;
        }
        catch (Exception)
        {
            throw;
        }
    }
}

Now all we need to do is configure Web API and start the server. Add a class named Startup and replace its code with:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.DependencyResolver = new NinjectResolver(NinjectConfig.CreateKernel());

        config.Routes.MapHttpRoute("default", "api/{controller}/{id}", new { id=RouteParameter.Optional });

        app.UseWebApi(config);
    }
}


Finally, start the server in Main method:
static void Main(string[] args)
{
    string uri = "http://localhost:8080/";

    using (WebApp.Start<Startup>(uri))
    {
        Console.WriteLine("Server started...");
        Console.ReadKey();
        Console.WriteLine("Server stopped!");
    }
}


Open a browser and enter the following URL:


http://localhost:8080/api/products


You should be able to see list of products returned from the repository class.

Happy coding!

Monday, 30 December 2013

Self-Hosting ASP.NET SignalR Using Katana

With the recent release of ASP.NET, Microsoft added a new component, Katana. It is an implementation of OWIN (Open Web Interface for .NET) that provides a light-weight alternative for creating a .NET server quickly, without needing any of the ASP.NET components or IIS. The server can be composed anywhere, even in a simple console application. For more information on Katana, read the excellent introductory article by Howard Dierking.

The ASP.NET project templates that come with Visual Studio 2013 include some pieces of Katana too, as we saw in my post on the identity system. Katana is used there to make the identity system available across all flavours of ASP.NET.

Katana makes the process of self-hosting ASP.NET Web API and SignalR much easier with its simple interface. Any Katana-based application needs a startup class to kick things off. We can perform any global configuration in this class; this includes hub route mapping for SignalR or route configuration for Web API.

Let’s build a simple console-hosted SignalR application using Katana. Create a console application in Visual Studio 2013 and install the following NuGet package in the project:


Install-Package Microsoft.AspNet.SignalR.SelfHost


This package installs all the dependencies required to write and publish a SignalR application in a non-web environment. Add a new class to the project, name it Startup, and add the following code to it:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR();
    }
}

The Configuration method is comparable to the Application_Start event in Global.asax; it gets called at the beginning of the application. As we can see, SignalR routes are mapped in this method. We need to start the server in the Main method of the Program class. Add the following code to the Main method:
static void Main(string[] args)
{
    string uri = "http://localhost:8080/";

    using (WebApp.Start<Startup>(uri))
    {
        Console.WriteLine("SignalR server started, start sending requests");
        Console.ReadKey();
        Console.WriteLine("Server stopped!");
    }
}

As there is no web server hosting our application, we need to specify a port number on which the server keeps running. The generic static method WebApp.Start is called to start the server; it calls the Configuration method defined above. The server stops once the console application stops running.

Let’s create a boring hello-world kind of hub. Following is a hub that takes the name of a person and returns a greeting message with the current timestamp:

public class HelloWorldHub : Hub
{
    public void Greet(string name) 
    {
        Console.WriteLine("Got a request from: {0}", name);
        string message= string.Format("Hello, {0}. Greeted at: {1}",name,DateTime.Now.ToString());
        Clients.All.acceptGreeting(message);
    }
}

Run the console application; you should see the startup message printed on the console.

Let’s quickly create a console client to test if the server is able to push messages to the client. Create another console application and add the following NuGet package to it:


Install-Package Microsoft.AspNet.SignalR.Client


Add following code to the Main method:

static void Main(string[] args)
{
    var hubConnection = new HubConnection("http://localhost:8080/");
    var hubProxy = hubConnection.CreateHubProxy("HelloWorldHub");
    hubProxy.On<string>("acceptGreeting", message => {
        Console.WriteLine("Received from server: " + message);
        Console.WriteLine("\n\nPress any key to exit.");
    });

    Console.WriteLine("Establishing connection...");
    hubConnection.Start().Wait();
    Console.WriteLine("Sending request to hub...");
    hubProxy.Invoke("Greet", "Ravi").Wait();

    Console.ReadLine();
    hubConnection.Stop();
}

Run the client application; you should see the greeting message received from the server printed on the console.



Happy coding!

Wednesday, 18 December 2013

Unit Testing Asynchronous Web API Action Methods Using MS Test

Since Entity Framework now has very nice support for performing all its actions asynchronously, the methods in the repositories in our projects will soon turn into asynchronous methods, and so will the code depending on them. Tom Fitzmacken did a nice job putting together a tutorial on unit testing Web API 2 controllers on the official ASP.NET site. The tutorial discusses testing synchronous action methods; the same techniques can be applied to test asynchronous actions as well. In this post, we will see how easy it is to test asynchronous Web API action methods using MS Test.

I created a simple repository interface with just one method in it. The implementation class uses Entity Framework to get a list of contacts from the database.

public interface IRepository
{
    Task<IEnumerable<Contact>> GetAllContactsAsync();
}

public class Repository : IRepository
{
    ContactsContext context = new ContactsContext();

    public async Task<IEnumerable<Contact>> GetAllContactsAsync()
    {
        return await context.Contacts.ToArrayAsync();
    }
}

Following is the ASP.NET Web API controller that uses the above repository:
public class ContactsController : ApiController
{
    IRepository repository;

    public ContactsController() : this(new Repository())
    { }

    public ContactsController(IRepository _repository)
    {
        repository = _repository;
    }

    [Route("api/contacts/plain")]
    public async Task<IEnumerable<Contact>> GetContactsListAsync()
    {
        IEnumerable<Contact> contacts;

        try
        {
            contacts = await repository.GetAllContactsAsync();
        }
        catch (Exception)
        {
            throw;
        }

        return contacts;
    }

    [Route("api/contacts/httpresult")]
    public async Task<IHttpActionResult> GetContactsHttpActionResultAsync()
    {
        IEnumerable<Contact> contacts;

        try
        {
            contacts = await repository.GetAllContactsAsync();
        }
        catch (Exception ex)
        {
            return InternalServerError(ex);
        }
        
        return Ok(contacts);
    }
}

As we can see, the controller has two action methods performing the same task, but the way they return results is different. Since both action methods respond to HTTP GET, I used attribute routing to distinguish them. I used poor man’s dependency injection to instantiate the repository; it can easily be replaced with an IoC container.

Before writing unit tests for the above action methods, we need to create a mock repository.

public class MockRepository:IRepository
{
    List<Contact> contacts;

    public bool FailGet { get; set; }

    public MockRepository()
    {
        contacts = new List<Contact>() {
            new Contact(){Id=1, Title="Title1", PhoneNumber="1992637281", CustomerId=1},
            new Contact(){Id=2, Title="Title2", PhoneNumber="9172735171", SupplierId=2},
            new Contact(){Id=3, Title="Title3", PhoneNumber="8361910353", CustomerId=2},
            new Contact(){Id=4, Title="Title4", PhoneNumber="7801274518", SupplierId=3}
        };
    }

    public async Task<IEnumerable<Contact>> GetAllContactsAsync()
    {
        if (FailGet)
        {
            throw new InvalidOperationException();
        }
        await Task.Delay(1000);
        return contacts;
    }
}

The property FailGet in the above class is used to force the mock to throw an exception. This is done just to cover more test cases.

In the test class, we need a TestInitialize method to arrange the objects needed for unit testing.

[TestClass]
public class ContactsControllerTests
{
    MockRepository repository;
    ContactsController contactsApi;

    [TestInitialize]
    public void InitializeForTests()
    {
        repository = new MockRepository();
        contactsApi = new ContactsController(repository);
    }
}

Let us test the GetContactsListAsync method first. Testing this method seems straightforward, as it either returns a plain generic list or throws an exception. But the test method can’t just return void like other tests, because the method under test is asynchronous. To test an asynchronous method, the test method should also be made asynchronous and return a Task. The following test checks if the controller action returns a collection of length 4:
[TestMethod]
public async Task GetContacts_Should_Return_List_Of_Contacts() 
{
    var contacts = await contactsApi.GetContactsListAsync();
    Assert.AreEqual(contacts.Count(), 4);
}

If the repository encounters an exception, the exception is re-thrown from the GetContactsListAsync method as well. This case can be checked using the ExpectedException attribute.
[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public async Task GetContacts_Should_Throw_Exception()
{
    repository.FailGet = true;
    var contacts = await contactsApi.GetContactsListAsync();
}

Now let’s test the GetContactsHttpActionResultAsync method. Though this method does the same thing as the previous one, it doesn’t return plain .NET objects. To test it, we need to extract the result from the IHttpActionResult object returned by the action method. The return type of the Ok() method used above is OkNegotiatedContentResult<T>; the IHttpActionResult has to be cast to this type to check the result obtained. The following test checks if the action result contains a collection when the repository is able to fetch results:
[TestMethod]
public async Task GetContactsHttpActionResult_Should_Return_HttpResult_With_Contacts()
{
    var contactsResult = await contactsApi.GetContactsHttpActionResultAsync() as OkNegotiatedContentResult<IEnumerable<Contact>>;

    Assert.AreEqual(contactsResult.Content.Count(), 4);
}

Similarly, in case of an error, we call the InternalServerError() method to return the exception for us. We need to cast the result to the ExceptionResult type to be able to check the type of exception thrown, as shown below:
[TestMethod]
public async Task GetContactsHttpActionResult_Should_Return_HttpResult_With_Exception()
{
    repository.FailGet = true;
    var contactsResult = await contactsApi.GetContactsHttpActionResultAsync() as ExceptionResult;
    Assert.IsInstanceOfType(contactsResult.Exception, typeof(InvalidOperationException));
}

Happy coding!

Tuesday, 3 December 2013

Using External Login Providers with ASP.NET Identity System

In the last post, we saw how simple the new Identity system in ASP.NET is, and we explored the code generated by the Visual Studio template for handling local user accounts. In this post, we will see how easy it is to use an external login provider with the Identity system.

Allowing users to log in to a website with an external provider has several advantages:

  1. Your application doesn’t need to store user names and passwords of users who sign in with an external login
  2. Almost all external login providers serve their login pages securely over HTTPS
  3. The user doesn’t need to register on your site, and so doesn’t need to remember yet another user name and password


The Startup.Auth.cs file contains a block of commented code for external login providers. The providers include Google, Microsoft, Twitter and Facebook.

// Uncomment the following lines to enable logging in with third party login providers
//app.UseMicrosoftAccountAuthentication(
//    clientId: "",
//    clientSecret: "");

//app.UseTwitterAuthentication(
//   consumerKey: "",
//   consumerSecret: "");

//app.UseFacebookAuthentication(
//   appId: "",
//   appSecret: "");

//app.UseGoogleAuthentication();

Out of these four providers, Google is the easiest to configure and use: it is as simple as uncommenting the last statement in the snippet above. For the other providers, we need to visit the third parties’ developer websites and register the application to obtain the keys. Pranav Rastogi documented these steps for us; they are available on the Web Development Tools Blog on MSDN.

I enabled Google and Twitter in my application. The login page shows buttons to use these providers.

Once the login at the external provider is successful, the user is directed to the ExternalLoginCallback action, after which the application asks the user to set a local user name. The user can optionally also set a local password for the account. Following is the code inside this action method:
public async Task<ActionResult> ExternalLoginCallback(string returnUrl)
{         
    var loginInfo = await AuthenticationManager.GetExternalLoginInfoAsync();
    if (loginInfo == null)
    {
        return RedirectToAction("Login");
    }

    // Sign in the user with this external login provider if the user already has a login
    var user = await UserManager.FindAsync(loginInfo.Login);
    if (user != null)
    {
        await SignInAsync(user, isPersistent: false);
        return RedirectToLocal(returnUrl);
    }
    else
    {
        // If the user does not have an account, then prompt the user to create an account
        ViewBag.ReturnUrl = returnUrl;
        ViewBag.LoginProvider = loginInfo.Login.LoginProvider;
        return View("ExternalLoginConfirmation", new ExternalLoginConfirmationViewModel { UserName = loginInfo.DefaultUserName });
    }
}
It does the following:

  1. Checks if the external login was successful. If the login failed, the user is directed back to the login page
  2. Once the login is successful, checks if the logged-in user already has a local user name in the application. If the user doesn’t have one, the application prompts the user to create one
  3. If the user already has an account, the application directs the user to the return URL

To identify the user when he/she comes back and logs in using the same third-party account, the Identity system maintains details in the following two tables:

  1. AspNetUsers: The same table that holds details of local users. By default, records for external users are created with empty passwords.
  2. AspNetUserLogins: Holds the user ID, the name of the external provider and a provider key

The code that creates these records can be found in the ExternalLoginConfirmation action method of the AccountController. Following is the snippet that adds the records:

var info = await AuthenticationManager.GetExternalLoginInfoAsync();
if (info == null)
{
    return View("ExternalLoginFailure");
}
var user = new ApplicationUser() { UserName = model.UserName };
var result = await UserManager.CreateAsync(user);
if (result.Succeeded)
{
    result = await UserManager.AddLoginAsync(user.Id, info.Login);
    if (result.Succeeded)
    {
        await SignInAsync(user, isPersistent: false);
        return RedirectToLocal(returnUrl);
    }
}

The AuthenticationManager.GetExternalLoginInfoAsync method called above is provider-agnostic; it returns just the data needed to identify the user: the default user name, the name of the third-party provider and an authentication ID. It doesn’t provide vendor-specific details about the user. To get more details from the vendor, we need to use the AuthenticateAsync method.
var vendorResult = await AuthenticationManager.AuthenticateAsync(DefaultAuthenticationTypes.ExternalCookie);

The following screenshot shows the details obtained after logging in using Twitter. If you inspect the value returned from Google, you will find the user’s e-mail ID in the details; you may use it as the default user name for the application as well.

Happy coding!

Saturday, 30 November 2013

A Look at the new Identity System in ASP.NET

One of the key features added to the core of ASP.NET with the release of Visual Studio 2013 is the new Identity system. If you create a new ASP.NET project, whether a Web Forms or an MVC project, you will find the default authentication type set to "Individual User Accounts".



The individual user accounts authentication type creates an authentication system based on the Identity system, which looks greatly simplified compared with the Membership system we had in earlier versions. The default project template includes the following references (also available via NuGet packages with the same names):

  • Microsoft.AspNet.Identity.Core: Consists of the core classes and interfaces for identity system
  • Microsoft.AspNet.Identity.EntityFramework: Consists of classes implemented for Identity system using ADO.NET Entity Framework

The default templates make use of Entity Framework to persist the user’s information in a SQL Server database. The default database used is an MDF file.

Classes and Setup:
The project template includes Owin and Katana. In the authentication configuration, we see the following settings applied to set up a cookie-based authentication system on Owin:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
    LoginPath = new PathString("/Account/Login")
});
app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);

This makes the authentication system sharable across Web Forms, MVC, Web API and SignalR without any additional code. The authentication configuration is called from the Owin start-up class. In the Models folder of the project, a file with the following code is added for us:
public class ApplicationUser : IdentityUser
{
}

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection")
    {
    }
}

As we see, this file includes two classes:

  • ApplicationUser: Inherits from Microsoft.AspNet.Identity.EntityFramework.IdentityUser. The IdentityUser class implements the interface IUser; any user class in the Identity system must implement this interface. IdentityUser includes all the properties needed for a user account, like UserName, PasswordHash and SecurityStamp. The ApplicationUser class allows us to add our own properties to a user’s profile; for example, if your web site needs to capture the e-mail ID and phone number of the user, you can add them to this class. IdentityUser also includes navigation properties for UserRoles, UserClaims and UserLogins. The UserLogin entity stores information when a user logs in using an external authentication provider, like Google, Microsoft, Twitter or Facebook.
  • ApplicationDbContext: The Entity Framework code-first DbContext class that creates the database with the necessary tables. It extends IdentityDbContext, the DbContext class defined in the Microsoft.AspNet.Identity.EntityFramework namespace, which includes DbSets for users and roles. This class allows us to add our own DbSets to the database being created.

Once the application is executed, a SQL Server MDF file is created with the following tables:




User management:
To manage the users and their data, the application uses the following two classes:

  • UserStore: A class in the Microsoft.AspNet.Identity.EntityFramework namespace. It is responsible for all database operations related to managing users. It needs a DbContext to do its work; we generally pass in an instance of IdentityDbContext. This class implements six interfaces: IUserStore, IUserPasswordStore, IUserLoginStore, IUserClaimStore, IUserRoleStore and IUserSecurityStampStore. Each has a specific purpose: IUserStore is for creating, finding, updating and deleting user information; IUserPasswordStore is for managing passwords; and so on. When writing a custom Identity system, IUserStore is the minimum interface that must be implemented. All methods declared in these interfaces are asynchronous and return a Task.
  • UserManager: This class is defined in the Microsoft.AspNet.Identity.Core assembly. It needs an instance of an IUserStore type. The UserManager can be viewed as a repository that calls specific methods of the UserStore to manage users for the application. As in the UserStore, the methods of UserManager are asynchronous.
The UserManager is instantiated as follows:
var UserManager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new ApplicationDbContext()));

Let’s see how a new user registration works. In the AccountController of the MVC project, we find the following code in the Register action method (similar code can be found in Register.aspx.cs of the Web Forms project):
var user = new ApplicationUser() { UserName = model.UserName };
var result = await UserManager.CreateAsync(user, model.Password);
if (result.Succeeded)
{
    await SignInAsync(user, isPersistent: false);
    return RedirectToAction("Index", "Home");
}

The UserManager.CreateAsync() method calls UserStore.CreateAsync to store the user information in the database. Once the registration succeeds, a call to SignInAsync is made to authenticate the user. Following is the code inside the SignInAsync method:
AuthenticationManager.SignOut(DefaultAuthenticationTypes.ExternalCookie);
var identity = await UserManager.CreateIdentityAsync(user, DefaultAuthenticationTypes.ApplicationCookie);
AuthenticationManager.SignIn(new AuthenticationProperties() { IsPersistent = isPersistent }, identity);

The AuthenticationManager used here is the Authentication object of the current Owin context. The first statement clears any external cookies in the current application context. Then the UserManager is asked to create a claims-based identity for the current user, and the obtained identity object is set on the current Owin context.

The Login action method differs by just one statement, shown below:

var user = await UserManager.FindAsync(model.UserName, model.Password);

The FindAsync method returns null if the user’s credentials are not valid. If the credentials are valid, it returns all the information about the user. Once the user is found, a claims-based identity object for the user is set on the Owin context.

LogOff is a straightforward implementation. It just calls the SignOut method of the Authentication object of the Owin context:

Context.GetOwinContext().Authentication.SignOut();

Similarly, the template also generates code for changing the password, removing a user account and handling external login accounts. We will see how the Identity system manages external logins in a future post.

Happy coding!

Friday, 1 November 2013

Using Breeze JS to Consume ASP.NET Web API OData in an Angular JS Application

In the last post, we saw how Breeze JS eases the job of querying OData services. It is a lot of fun to use this great library with our favourite SPA framework, Angular JS. In this post, we will see how to hook up these two libraries to create data-rich applications.

As stated in the previous post, Breeze requires datajs to understand OData conventions. All functions performing CRUD operations in Breeze return a Q promise. Any changes made to properties of $scope inside the then method of a Q promise are not watched automatically by Angular, as callbacks hooked up to Q are executed outside the Angular world. If the same thing is done using $q, we don’t have to call $scope.$apply to make the changes visible to Angular’s dirty checking. For this purpose, the Breeze team has created a module (use$q). This module can be installed in the project via the NuGet package Breeze.Angular.Q. The package adds a JavaScript file to the application, breeze.angular.q.js. Once this file is included and the module is loaded, we don’t need q.js anymore.

Following is the list of scripts to be included on the page:

<script src="Scripts/angular.js"></script>
<script src="Scripts/datajs-1.1.1.js"></script>
<script src="Scripts/breeze.min.js"></script>
<script src="Scripts/breeze.angular.q.js"></script>

We don’t need jQuery anymore, as Breeze detects the presence of Angular and configures its AJAX adapter to use $http (check the release notes of Breeze 1.4.4).

As both Angular and Breeze are JavaScript libraries, they can easily be used together. But we can’t enjoy the combination unless we follow the architectural constraints of Angular JS. The moment we include breeze.js in an HTML page, the JS object “breeze” is available in the global scope. It can be used directly in any JavaScript component; Angular controllers and services are no exceptions. But the best way to use an object in an Angular component is through dependency injection, and any global object can be made injectable by registering it as a value:
var app = angular.module('myApp', []);
app.value('breeze', breeze);
app.service('breezeDataSvc', function ($q, breeze) {
    //logic in the service
});

We need to ask Breeze to use $q as soon as the Angular JS application kicks off. For this, we register the following run block:
app.run(['$q','use$q', function ($q, use$q) {
    use$q($q);
}]);
Breeze has to be configured to work with the OData service and to use Angular’s AJAX API instead of jQuery. This is done by the following statement:
breeze.config.initializeAdapterInstances({ dataService: "OData" });

Now all we need to do is instantiate an EntityManager and start querying. Following is the complete implementation of the service, including a function that performs a basic OData request:
app.service('breezeDataSvc', function (breeze, $q) {
    breeze.config.initializeAdapterInstances({ dataService: "OData" });
            
    var manager = new breeze.EntityManager("/odata/");
    var entityQuery = new breeze.EntityQuery();
            
    this.basicCustomerQuery = function () {
        var deferred = $q.defer();
        manager.executeQuery(entityQuery.from("Customers").where("FirstName", "contains", "M")).then(function (data) {
            deferred.resolve(data.results);
        }, function (error) {
            deferred.reject(error);
        });
        return deferred.promise;
    };
});

Following is a simple controller that uses the above service and sets the obtained results to a property in scope:
app.controller('SampleCtrl', function ($scope, breezeDataSvc) {
    function initialize() {
        breezeDataSvc.basicCustomerQuery().then(function (data) {
            $scope.customers = data; // the service already resolves with data.results
        }, function (err) {
            alert(err.message);
        });
    };

    initialize();
});

Run the page in a browser and observe the behaviour.

Update: This post was updated on 11th January 2014 as per the Breeze Angular Q-Promises page in the Breeze documentation

Happy coding!

Tuesday, 29 October 2013

Querying ASP.NET Web API OData using Breeze JS

A few days back I blogged about the query options supported by ASP.NET Web API OData. Though we get a number of options to query data from clients through REST-based URLs, building those URLs at runtime is not an easy task. One of the most popular ways to call services from rich JavaScript applications is through jQuery AJAX. While jQuery AJAX abstracts away the pain of dealing with the browser and configuring parameters, it expects an absolute URL. Writing a set of fixed URLs is easy, but in larger applications there are many scenarios where most of an OData URL has to be built dynamically, based on decisions made at runtime. And we can’t inspect any of these URLs unless a request is actually sent to them. It would be good to have an abstraction that generates the queries for us.
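To make the tedium concrete, here is a hypothetical sketch of the kind of URL-building code one ends up writing by hand without such an abstraction. The helper and option names are made up for illustration; they don’t come from any real library:

```javascript
// Hypothetical hand-rolled OData URL builder -- the sort of string plumbing
// an abstraction like Breeze saves us from writing and maintaining.
function buildODataUrl(entitySet, options) {
    var parts = [];
    if (options.filter) { parts.push("$filter=" + options.filter); }
    if (options.orderby) { parts.push("$orderby=" + options.orderby); }
    if (options.top !== undefined) { parts.push("$top=" + options.top); }
    if (options.skip !== undefined) { parts.push("$skip=" + options.skip); }
    return "/odata/" + entitySet + (parts.length ? "?" + parts.join("&") : "");
}

console.log(buildODataUrl("Customers", { filter: "ContactTitle eq 'Owner'", top: 10, skip: 40 }));
// /odata/Customers?$filter=ContactTitle eq 'Owner'&$top=10&$skip=40
```

Every new query shape means more branches in code like this, and the resulting URL can only be validated by actually sending a request to it.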

Breeze JS is the right library in such cases. Breeze is built around OData query standards, but it is not limited to querying OData: Breeze can manage complex object graphs, cache data, query cached local objects, save complex objects to the server, perform validations and more. Breeze solves most of the problems one faces while working with data in rich JavaScript applications. In this post, we will see how Breeze simplifies querying OData services; we will explore other essential features of Breeze in future posts. To learn more about Breeze, make sure to check the official documentation and interactive tutorials on the Breeze website.

Breeze uses jQuery for AJAX and Q for promises, and it needs datajs to talk to OData sources. Getting these scripts in Visual Studio is easy through the following NuGet packages:


Now add references to these libraries on the page.

<script src="~/Scripts/jquery-1.9.1.min.js"></script>
<script src="~/Scripts/datajs-1.1.1.min.js"></script>
<script src="~/Scripts/q.min.js"></script>
<script src="~/Scripts/breeze.min.js"></script>


Breeze heavily uses metadata of the data structure on which it has to work. An ASP.NET Web API OData service exposes this metadata through its endpoint, but we need to set an important property in the OData configuration: Namespace. Add the following statement to the Web API OData endpoint configuration:
modelBuilder.Namespace = "WebAPI_EF_OData.Models";

Make sure to check Brian Noyes’ step-by-step tutorial on consuming ASP.NET Web API OData using Breeze. Brian does a nice job of explaining each step in detail.

Let’s set up Breeze on the client side. Add the following code to the page on which you want to perform Breeze operations:

$(function () {
    var baseAddress = "/odata";
    breeze.config.initializeAdapterInstances({ dataService: "OData" });
    var manager = new breeze.EntityManager(baseAddress);
});

Now we can start using Breeze’s querying capabilities against the OData endpoint. Breeze uses LINQ-like operators to specify conditions on the data source. The following snippet shows a basic Breeze request to the entity set Customers and captures the response once it arrives from the server:
var query = breeze.EntityQuery.from("Customers");
manager.executeQuery(query, function (data) {
    //Manipulate UI
}, function (err) {
    //Show Error Message
});

The above query sends a request to the URL http://localhost:<port-no>/odata/Customers. The following query applies a simple condition on top of it:
var queryWithCondition = breeze.EntityQuery.from("Customers")
                                           .where("ContactTitle", "equals", "Owner");

This query corresponds to the URL http://localhost:<port-no>/odata/Customers?$filter=ContactTitle eq 'Owner'. As stated earlier, Breeze is capable of expressing any OData URL. It has operators for ordering, checking lengths of strings, substrings, date operations, querying data in pages, expanding navigation properties and many others. The following listing shows a set of OData URLs and their corresponding Breeze queries.
// http://localhost:<port-no>/odata/Customers?$filter=startswith(ContactName,'Ana') eq true
var queryStartsWith = breeze.EntityQuery.from("Customers")
                                        .where("ContactName", "startsWith", "Ana");

// http://localhost:<port-no>/odata/Customers?$filter= not startswith(ContactName,'Ana') eq true
var Predicate = breeze.Predicate;
var predicate = new Predicate("ContactName", "startsWith", "Ana").not();
var queryNotStartsWith = breeze.EntityQuery.from("Customers")
                                           .where(predicate);

// http://localhost:<port-no>/odata/Customers?$filter=substringof('ill',CompanyName) eq true
var queryContainsSubstring = breeze.EntityQuery.from("Customers")
                                               .where("CompanyName", "contains", "ill");

// http://localhost:<port-no>/odata/Customers?$filter=length(ContactName) gt 10 and length(ContactName) lt 20
var queryCheckingLength = breeze.EntityQuery.from("Customers")
                                            .where("length(ContactName)", "greaterThan", "10")
                                            .where("length(ContactName)", "lessThan", "20");

// http://localhost:<port-no>/odata/Customers?$orderby=Country
var queryOrderBy = breeze.EntityQuery.from("Customers")
                                     .orderBy("Country");

// http://localhost:<port-no>/odata/Customers?$top=10
var queryTop10 = breeze.EntityQuery.from("Customers")
                                   .top(10);

// http://localhost:<port-no>/odata/Customers?$skip=40&$top=10
var queryTopAndSkip = breeze.EntityQuery.from("Customers")
                                        .top(10).skip(40);

// http://localhost:<port-no>/odata/Customers?$inlinecount=allpages
var queryInlineCount = breeze.EntityQuery.from("Customers")
                                         .inlineCount(true);

// http://localhost:<port-no>/odata/Customers?$expand=Orders/Employee
var queryExpand = breeze.EntityQuery.from("Customers")
                                    .expand("Orders/Employee");

// http://localhost:<port-no>/odata/Customers?$select=CustomerID,ContactName
var querySelect = breeze.EntityQuery.from("Customers")
                                    .select("CustomerID, ContactName");

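To see what such a translation involves, here is an illustrative sketch in plain JavaScript — my own code, not Breeze’s actual implementation — that maps a few of the (property, operator, value) triples used above to their OData $filter fragments, plus the paging arithmetic behind the $skip/$top example:

```javascript
// Illustrative only: a made-up translator for three of the operators above,
// mirroring the URL forms shown in the listing. Breeze's real implementation
// covers far more operators and handles typing and encoding properly.
function toODataFilter(property, operator, value) {
    switch (operator) {
        case "equals":
            return property + " eq '" + value + "'";
        case "startsWith":
            return "startswith(" + property + ",'" + value + "') eq true";
        case "contains":
            return "substringof('" + value + "'," + property + ") eq true";
        default:
            throw new Error("Operator not covered in this sketch: " + operator);
    }
}

// The $skip=40&$top=10 query above is simply "page 5 of size 10":
function pageToSkipTop(pageNumber, pageSize) {
    return { skip: (pageNumber - 1) * pageSize, top: pageSize };
}

console.log(toODataFilter("ContactName", "startsWith", "Ana"));
// startswith(ContactName,'Ana') eq true
```

Writing Breeze queries against the fluent API and letting the library emit such fragments is far less error-prone than concatenating them by hand.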
This is not the end! I created this list so it will serve as a one-stop reference for me, and hopefully for you as well. Most of the operators look and behave like LINQ operators. Check the API documentation on the official site for details on each of the operators used above.

Happy coding!