
Web API – message handlers – usage

The delegating handler in WebAPI is the most basic mechanism for intercepting the HTTP message lifecycle. A very clear and useful visualisation of it can be found on the WebAPI poster. It shows that message handlers are the first place in HTTP request processing that can read or modify the message. There are many cases in which we need to place some code before and after a request is executed. But first, let's see how to write such a handler.

Create your own Web API delegating handler

There are two kinds of handlers:

  • global
  • route scoped

In each case we have to define a handler that extends the DelegatingHandler class. If we wanted to replace the whole message processing with custom behaviour, we could extend the HttpMessageHandler class and override the SendAsync method. In our case we just want to intercept the standard message processing.

public class CustomHandler : DelegatingHandler
{
    protected async override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Debug.WriteLine("Process request");
        // Call the inner handler.
        var response = await base.SendAsync(request, cancellationToken);
        Debug.WriteLine("Process response");
        return response;
    }
}

As you can see in this code, we have a place for actions before and after request processing. We can operate on both the request and the response, and we can even completely modify the behaviour of WebAPI. In our example we just write some messages to the log during processing. At this point I want to warn you that we should be fully aware of the performance implications of this kind of handler. All actions that we decide to put into a handler will be executed for each request. That's why we shouldn't write too complex processing here. We shouldn't, but we can if we have a good reason.

Global

Message handlers can be registered globally, which means that they will be executed for each action in the system.

GlobalConfiguration.Configuration
    .MessageHandlers
    .Add(new CustomHandler());

Route scoped

We can also scope them to a specific route, simply by passing an additional parameter when creating the route.

IHttpRoute route = config.Routes.CreateRoute(
    routeTemplate: "api/MyRoute",
    defaults: new HttpRouteValueDictionary("route"),
    constraints: null,
    dataTokens: null,
    parameters: null,
    handler: new CustomHandler());

config.Routes.Add("MyRoute", route);

Those are all the options for registering a message handler. It is a really simple, yet powerful mechanism.

Usage

In this part we will see some examples of how message handlers can be used to solve common problems. Below I show only sample implementations; further details can be found in the linked blog posts.

Message logging

The most basic use of message handlers is logging information about requests.

Description: http://weblogs.asp.net/fredriknormen/log-message-request-and-response-in-asp-net-webapi

public class MessageLoggingHandler : MessageHandler
{
    protected override async Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        await Task.Run(() =>
            Debug.WriteLine(string.Format("{0} - Request: {1}\r\n{2}", correlationId, requestInfo, Encoding.UTF8.GetString(message))));
    }


    protected override async Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        await Task.Run(() =>
            Debug.WriteLine(string.Format("{0} - Response: {1}\r\n{2}", correlationId, requestInfo, Encoding.UTF8.GetString(message))));
    }
}
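
The example above inherits from a MessageHandler base class defined in the linked post, rather than directly from DelegatingHandler. A minimal sketch of what such a base class might look like is shown below; the class and method names match the example, but the implementation details are my assumption, not the original author's code.

public abstract class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var correlationId = Guid.NewGuid().ToString();
        var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

        // Read and log the request body before passing the message on.
        var requestMessage = request.Content != null
            ? await request.Content.ReadAsByteArrayAsync()
            : new byte[0];
        await IncommingMessageAsync(correlationId, requestInfo, requestMessage);

        var response = await base.SendAsync(request, cancellationToken);

        // Read and log the response body on the way back.
        var responseMessage = response.Content != null
            ? await response.Content.ReadAsByteArrayAsync()
            : new byte[0];
        await OutgoingMessageAsync(correlationId, requestInfo, responseMessage);

        return response;
    }

    protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
    protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
}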

Authentication

Another type of handler can be part of the authentication functionality. We can check the Authorization header globally for all requests.

Description: https://weblog.west-wind.com/posts/2013/Apr/30/A-WebAPI-Basic-Authentication-MessageHandler

public class BasicAuthenticationHandler : DelegatingHandler
{
    private const string WWWAuthenticateHeader = "WWW-Authenticate";

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        var credentials = ParseAuthorizationHeader(request);

        if (credentials != null)
        {
            var identity = new BasicAuthenticationIdentity(credentials.Name, credentials.Password);
            var principal = new GenericPrincipal(identity, null);

            Thread.CurrentPrincipal = principal;
            if (HttpContext.Current != null)
                HttpContext.Current.User = principal;
        }

        return base.SendAsync(request, cancellationToken)
            .ContinueWith(task =>
            {
                var response = task.Result;
                if (credentials == null && response.StatusCode == HttpStatusCode.Unauthorized)
                    Challenge(request, response);

                return response;
            });
    }

    /// <summary>
    /// Parses the Authorization header and creates user credentials
    /// </summary>
    /// <param name="request"></param>
    protected virtual BasicAuthenticationIdentity ParseAuthorizationHeader(HttpRequestMessage request)
    {
        string authHeader = null;
        var auth = request.Headers.Authorization;
        if (auth != null && auth.Scheme == "Basic")
            authHeader = auth.Parameter;

        if (string.IsNullOrEmpty(authHeader))
            return null;

        authHeader = Encoding.Default.GetString(Convert.FromBase64String(authHeader));

        var tokens = authHeader.Split(':');
        if (tokens.Length < 2)
            return null;

        return new BasicAuthenticationIdentity(tokens[0], tokens[1]);
    }


    /// <summary>
    /// Sends the authentication challenge response
    /// </summary>
    /// <param name="request"></param>
    /// <param name="response"></param>
    void Challenge(HttpRequestMessage request, HttpResponseMessage response)
    {
        var host = request.RequestUri.DnsSafeHost;                    
        response.Headers.Add(WWWAuthenticateHeader, string.Format("Basic realm=\"{0}\"", host));
    }
}
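
The handler uses a BasicAuthenticationIdentity class defined in the linked post. A minimal sketch of it could look like the following (the exact implementation in the original post may differ):

public class BasicAuthenticationIdentity : GenericIdentity
{
    public string Password { get; private set; }

    public BasicAuthenticationIdentity(string name, string password)
        : base(name, "Basic")
    {
        // Keep the password alongside the user name so the handler can validate it later.
        Password = password;
    }
}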

Checking API keys

Sometimes, when we publish an API as a public service, it can be useful to add API key functionality to restrict access to it.

Description: http://www.asp.net/web-api/overview/advanced/http-message-handlers

public class ApiKeyHandler : DelegatingHandler
{
    public string Key { get; set; }

    public ApiKeyHandler(string key)
    {
        this.Key = key;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (!ValidateKey(request))
        {
            var response = new HttpResponseMessage(HttpStatusCode.Forbidden);
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);    
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }

    private bool ValidateKey(HttpRequestMessage message)
    {
        var query = message.RequestUri.ParseQueryString();
        string key = query["key"];
        return (key == Key);
    }
}
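
To activate the handler we simply register it, for example globally, passing the expected key (the key value below is only an illustration):

GlobalConfiguration.Configuration
    .MessageHandlers
    .Add(new ApiKeyHandler("my-secret-key"));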

Requests rate limiting

Another API feature that suits message handlers well is rate limiting. It is useful to protect the API from too many requests made by a single user, which may cause system delays.

Description: http://blog.maartenballiauw.be/post/2013/05/28/Throttling-ASPNET-Web-API-calls.aspx

public class ThrottlingHandler
    : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var identifier = request.GetClientIpAddress();

        long currentRequests = 1;
        long maxRequestsPerHour = 60;

        if (HttpContext.Current.Cache[string.Format("throttling_{0}", identifier)] != null)
        {
            currentRequests = (long)System.Web.HttpContext.Current.Cache[string.Format("throttling_{0}", identifier)] + 1;
            HttpContext.Current.Cache[string.Format("throttling_{0}", identifier)] = currentRequests;
        }
        else
        {
            HttpContext.Current.Cache.Add(string.Format("throttling_{0}", identifier), currentRequests,
                null, Cache.NoAbsoluteExpiration, TimeSpan.FromHours(1),
                CacheItemPriority.Low, null);
        }

        Task<HttpResponseMessage> response = null;
        if (currentRequests > maxRequestsPerHour)
        {
            response = CreateResponse(request, HttpStatusCode.Conflict, "You are being throttled.");
        }
        else
        {
            response = base.SendAsync(request, cancellationToken);
        }

        return response;
    }

    protected Task<HttpResponseMessage> CreateResponse(HttpRequestMessage request, HttpStatusCode statusCode, string message)
    {
        var tsc = new TaskCompletionSource<HttpResponseMessage>();
        var response = request.CreateResponse(statusCode);
        response.ReasonPhrase = message;
        response.Content = new StringContent(message);
        tsc.SetResult(response);
        return tsc.Task;
    }
}
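
The handler above relies on a GetClientIpAddress extension method defined in the linked post. A minimal sketch of such an extension, assuming the API is web-hosted on ASP.NET/IIS, could look like this:

public static class HttpRequestMessageExtensions
{
    public static string GetClientIpAddress(this HttpRequestMessage request)
    {
        // When web-hosted, Web API stores the current HttpContext under this property key.
        object context;
        if (request.Properties.TryGetValue("MS_HttpContext", out context))
            return ((HttpContextWrapper)context).Request.UserHostAddress;

        return "unknown";
    }
}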

https://github.com/stefanprodan/WebApiThrottle

Additional resources:

http://www.strathweb.com/2012/05/implementing-message-handlers-to-track-your-asp-net-web-api-usage/
http://blog.karbyn.com/index.php/message-handlers-in-web-api/


Postman – powerful API testing tool

When creating an API, it is necessary to test API calls. We can do this in many ways. One method is simply entering the URL in the browser address bar. However, with this method we can test only GET requests, and there are also problems with setting up headers.

Another method to test HTTP requests is to use the cURL format and the appropriate software: https://curl.haxx.se/download.html. It is a console program, so it is very extensible and powerful. However, being a console program, it is more complicated to use, and we have to manage our request database on our own. It can be easier.

Fortunately, there is a program called Postman. It is software for all kinds of HTTP request work. It can be installed as a standalone application or as a Chrome plug-in.

Features

Postman has many different features. All of them are related to executing HTTP requests and managing them. We will try to list them below.

Running HTTP requests

The most basic functionality is executing requests.

This view is very simple, but very powerful. These few buttons give us the possibility to define request options in many different ways. On the screen above we can see the following parameters:

  1. Choose the HTTP method
  2. Define the request address
  3. Define query parameters (e.g. ?id=12)
  4. Execute the HTTP request
  5. Save it in a library
  6. Set the authorization method and data
  7. Define request headers. We can set up presets with templates of the most useful header combinations (e.g. for authorization or to define request context)
  8. Request body for POST, PUT and similar request methods
  9. Pre-request script (executed before the request)
  10. Tests script (executed after the response)

This covers the full range of HTTP request options. As a side note, I admire the UX of this solution; it is a very self-discoverable and meaningful interface.

Saving requests

Each request can be saved and organized in collections.

We can create collections for all requests. It helps to organise items by project or related area of work.

We can also share defined collections with our team, or even publish them as an appendix to API documentation.

Exporting request definitions

However, it is not always convenient to define each request manually. It works for debugging purposes, when only a few requests are needed. If we want to test more requests, or we work with some bigger system, we can use Postman's ways of obtaining these requests automatically.

Importing from Swagger

The first option is to import the API definition. Postman supports the Swagger, WADL and RAML definition formats.

Let's take the Swagger format for this example. The file defines the methods and parameters required for each action.

"paths": {
    "/pets": {
        "get": {
            "description": "Returns all pets from the system that the user has access to",
            "produces": ["application/json"],
            "responses": {
                "200": {
                    "description": "A list of pets.",
                    "schema": {
                        "type": "array",
                        "items": {
                            "$ref": "#/definitions/Pet"
                        }
                    }
                }
            }
        }
    }
},

We can import it using the Import button in the top left corner.

Interceptor

The other method to get a sample HTTP request is to use the Postman Interceptor tool. It is a Chrome extension that works with the regular Postman application.

After installing it, we should turn it on in Chrome.

Then we should enable it in Postman

When we do this, we can capture all requests made by our page in Chrome.

This feature saves us the time of manually setting all request parameters like headers, authorization or data.

Express Profiler – free tool for SQL Server profiling

If we write code that works with databases, it is usually beneficial to preview what SQL is executed while the system is used. It is most useful when we use an ORM like Entity Framework, where each query is transformed into a SQL query and then executed on the database. Profilers are mostly used for:

  • complexity check of generated SQL query
  • query optimization
  • checking for common querying problems (e.g. the N+1 problem)

In my current projects I most often use SQL Server. That's why one of my favourite tools for profiling SQL queries is Express Profiler. It supports all versions of SQL Server (including Express).

How to download?

To start using this program we only need to download and run the proper package. You can download it directly from the Express Profiler web page: https://expressprofiler.codeplex.com/.

How to use?

After downloading the program we can run it and configure a database connection inside.

Application configuration is easy and reduced to setting the proper database connection information. On the screen above you can see the configuration fields:

  • name of DB server
  • type of authentication
  • optionally, user and password

That's all the required configuration. After that, we can just press the "Start trace" button, then get back to the tested application and perform the operations we want to examine.

Tracing

We can notice that the Express Profiler window is getting populated.

It is a list of queries executed on the database. For each of them we can see:

  • complete query text
  • database which they are executed on
  • duration
  • number of reads and writes

This is a basic set of information, but it gives us a sufficient level of knowledge to investigate which queries are executed.

Filtering

That is the main feature of Express Profiler, but it also provides some help in systems where many queries are executed at a single point in time. We can set up filtering on multiple parameters (e.g. database name, duration or query text).

This is a very simple tool, but a very powerful one for me. It is also free.

WebAPI integration tests with OWIN

Integration tests check the behaviour of an application from the interface down to the database. In our case they will be used with an API. They are most useful for:

  • checking whether the API operates correctly on data
  • performing smoke tests (checking if some part of the API is working)
  • simulating some types of behaviour on the API server

Add OWIN into existing app

In this article we will be talking mostly about WebAPI integration tests. In this framework it is very easy to set them up. But first, we should be able to start the API server independently. We can use the OWIN Katana standalone server functionality. This is a different way of hosting a WebAPI application: we can still run our API on an IIS server, but OWIN gives us the ability to start the server differently.

We can migrate our project to OWIN very easily. To do this, it is only necessary to add two references and configure the server.

So at the beginning, add references to the following NuGet packages:

  • Microsoft.Owin.Host.SystemWeb
  • Microsoft.AspNet.WebApi.Owin

After this operation, the only thing left is to configure the server. We can do this by defining a class and tagging it with the proper attribute:

[assembly: OwinStartup(typeof(Owasp.AppSensor.Demo.Api.Startup))]
namespace Owasp.AppSensor.Demo.Api
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration httpConfiguration = new HttpConfiguration();
            WebApiConfig.Register(httpConfiguration);
            app.UseWebApi(httpConfiguration);
        }
    }
}

The OwinStartup attribute can be defined anywhere in the project because it is an assembly attribute. It defines which class will be used for the OWIN configuration. Generally we can assume that the Configuration method should do exactly the same as the Application_Start() method defined in Global.asax. Moreover, the Application_Start() method won't be used any more. The only line necessary for us and different from Global.asax is:

app.UseWebApi(httpConfiguration);

It configures the WebAPI framework to work properly with controllers and its whole pipeline.

At the end we can check that everything works, just by setting a breakpoint in the Configuration method. If it breaks there, OWIN is configured.

Basic API test

When we have successfully set up the OWIN server, we can start working on integration tests. We should add one more library before we write the first test. This time we will need:

  • Microsoft.Owin.Testing

Now we can create our test very quickly. The whole test can be divided into 3 parts:

  • Arrange (start API server)
  • Act (execute API method)
  • Assert (check result of sent request)

This process can be expressed very concisely:

[Test]
public async Task TestGoodMethod()
{
    using (var server = TestServer.Create<Startup>())
    {
        var result = await server.HttpClient.GetAsync("api/book");
        string responseContent = await result.Content.ReadAsStringAsync();
        var entity = JsonConvert.DeserializeObject<List<string>>(responseContent);

        Assert.IsTrue(entity.Count == 3);
    }
}

The first line, TestServer.Create<Startup>(), starts an OWIN server using the Startup class as its configuration. It is started on some generic localhost address. Then we can easily execute requests against this server. Within TestServer we have access to an HttpClient for executing any type of request. As a result we get the string value of the complete HTTP response.

That's all we need to test API methods.
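
The same HttpClient can also be used for other HTTP verbs. Below is a hedged sketch of a POST call against the same in-memory server; the api/book route and the payload shape are assumptions carried over from the example above.

[Test]
public async Task TestPostMethod()
{
    using (var server = TestServer.Create<Startup>())
    {
        // The payload shape is an assumption; adjust it to the real api/book contract.
        var content = new StringContent("\"New book\"", Encoding.UTF8, "application/json");
        var result = await server.HttpClient.PostAsync("api/book", content);

        Assert.AreEqual(HttpStatusCode.OK, result.StatusCode);
    }
}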

http://www.strathweb.com/2013/12/owin-memory-integration-testing/

Mocking object in test

One interesting aspect of API integration testing is mocking data inside the test server. We can do that with this approach as well.

I will describe this mechanism with a Dependency Injection library in use, because it helps a lot with mocking data. In most cases the DI container is configured in the Global.asax file, so for an OWIN server the configuration can be placed in the Startup.Configuration() method.

We can define a virtual method to be able to set up our mocks in tests.

public void Configuration(IAppBuilder app)
{
    HttpConfiguration httpConfiguration = new HttpConfiguration();
    WebApiConfig.Register(httpConfiguration);

    var builder = new ContainerBuilder();
    builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
    builder.RegisterWebApiFilterProvider(httpConfiguration);
    builder.RegisterModule<ServicesDependencyModule>();

    ConfigureIoC(builder);
    var container = builder.Build();
    httpConfiguration.DependencyResolver = new AutofacWebApiDependencyResolver(container);


    app.UseWebApi(httpConfiguration);
}

protected virtual void ConfigureIoC(ContainerBuilder builder)
{

}

Thanks to this, we can inherit from this class in our test project and override the ConfigureIoC method to add custom mocks.

public class TestStartup : Startup
{
    protected override void ConfigureIoC(ContainerBuilder builder)
    {
        // set up mocks
        base.ConfigureIoC(builder);
    }
}
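
A hedged example of what the mock setup could look like inside ConfigureIoC, together with a test that starts the server from TestStartup. The IBookService interface and the Moq-based registration are assumptions used only for illustration.

// Inside TestStartup.ConfigureIoC – a hypothetical IBookService mocked with Moq;
// in Autofac, this later registration overrides the one from the real module.
var bookService = new Mock<IBookService>();
bookService.Setup(s => s.GetAll()).Returns(new List<string> { "A", "B", "C" });
builder.RegisterInstance(bookService.Object).As<IBookService>();

[Test]
public async Task TestWithMockedService()
{
    using (var server = TestServer.Create<TestStartup>())
    {
        var result = await server.HttpClient.GetAsync("api/book");

        Assert.AreEqual(HttpStatusCode.OK, result.StatusCode);
    }
}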

More resources:
https://blog.kloud.com.au/2014/10/26/asp-net-web-api-integration-testing-with-one-line-of-code/
http://www.aaron-powell.com/posts/2014-01-12-integration-testing-katana-with-auth.html
http://amy.palamounta.in/blog/2013/08/04/integration-testing-for-asp-dot-net-web-api/




Every day managing knowledge

Posts in this series:
1. Knowledge transfer
2. Knowledge transfer situations

All the tasks related to transferring knowledge would be much easier if the company managed knowledge in its teams properly. It matters in everyday work. We will describe here how to do this effectively, starting from the first days of a new employee.

Starting work

At the beginning of work in any company, a new employee has a lot to do before being able to start developing: downloading software, configuring the workstation or customizing tools. Many of these tasks are very repeatable, so the installation and configuration processes could be automated.

It can be done using dedicated installation and configuration tools.

In many cases these tools are not enough to automate all required actions (e.g. signing papers). To automate this part we can create an ordered list of actions to perform. A good example is the Trello company, which creates this kind of list for many typical actions:

https://medium.com/@Liz_Hall1/onboarding-new-hires-with-trello-ecc87e87ffd5#.nlvikoahp
https://trello.com/b/MmaVr9Hw/onboarding-new-hires-public-board

Everyday managing

An important part of knowledge management in everyday work is defining the level of complexity and a way to stay consistent with it. To achieve this, we have to show the benefits to our team members. It can motivate them to truly follow the principles, and it can also help them improve these rules themselves.

Most of the team knowledge is distributed among all team members. The main goal is to transfer this information from individuals to one shared medium. As was written in the previous part, the main property of a good knowledge management system is storing data in a shared, simple and easy-to-access form.

Business knowledge

Storing business understanding in a shareable place could save us a lot of time during the support stage, for example the time spent when different team members ask the business the same questions twice. The crucial part is a consistent language understandable by both sides: business and developers. The simplest choice for this purpose is visual diagrams.

My favourite diagram is the business process flow. It gives a wide view of the primary activity of the business and of how our system could help with their work.


https://technet.microsoft.com/en-us/library/dn887193.aspx

There are many examples of business diagrams that could be useful for us. It mostly depends on the particular company and its needs. A good source of inspiration for this kind of visuals is one of the architecture frameworks (e.g. TOGAF): http://www.togaf.info/togaf9/togafSlides9/TOGAF-V9-Sample-Catalogs-Matrics-Diagrams-v2.pdf

Soft knowledge

During development there are many moments when developers need to consult something with the business team or other developers. To handle this communication we should also have one common tool. The most popular software for this purpose is Confluence and Slack. The first one – Confluence – is a more commercial and generally bigger product aimed at corporate customers. It is very customizable and powerful, and it also has quite a good pricing policy that is friendly to small companies. Slack, on the other hand, is more popular in the open-source and startup communities. Both tools work a little differently, but they are used for the same purposes: storing and managing knowledge and helping to communicate with other team members.

Logging guidelines for developers

One of the tasks during my work on the AppSensor .NET library was to create a simple implementation of a log provider. It was my inspiration to do some research on the rules for creating a good logging system.

Think about the purpose

The first thing to do before creating, or even configuring, a logging system is to think about its purpose: when might we need this information, and how can it be useful in future use of the application? The main reasons to set up logging are listed below:

  • errors – we should be aware of all errors appearing in our system, on both the front-end and the back-end side,
  • security – diagnostic information related to access control or authorization can lead us to detect potential attacks,
  • information about problems/unusual conditions – when the system detects some unusual action or goes into a state that might need administrative action (e.g. lack of disk space), a log can be the source of more detailed information,
  • control of process flow – logs can also provide information about a process's progress and how it behaves.

Stores

The next decision to take is where we want to store log data. We have several possibilities to choose from:

  • files on disk – very fast and easy, but not suitable for distributed systems; it is also hard to query the information in them,
  • database – very useful, easy to query, good for distributed or load-balanced systems, a little more difficult to configure and requires a DB server,
  • external providers (Loggly, Splunk, …) – easy to set up, very convenient for querying and reporting, slower because data is sent over the Internet.

Logging levels

In a typical application we want to log many different types of events. In web systems, the number of users on-line simultaneously can be very large, so the number of log entries can be tremendous. That's why it is so important to set up good levels of detail. Usually the following levels are used:

  • Fatal – Highest level: important stuff down
  • Error – For example application crashes / exceptions.
  • Warn – Incorrect behaviour, but the application can continue
  • Info – Normal behaviour like mail sent, user updated profile etc.
  • Debug – Executed queries, user authenticated, session expired
  • Trace – Begin method X, end method X etc.

According to: https://github.com/NLog/NLog/wiki/Log-levels

A good configuration of log levels gives us a parameter to set the level of detail dynamically, depending on our needs. More detailed logs give more information, but they take much more disk space.
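
As an illustration, below is a minimal sketch of logging at different levels with NLog (assuming the NLog package is installed; the OrderService class is only an example):

public class OrderService
{
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    public void PlaceOrder(int orderId)
    {
        Logger.Trace("Begin PlaceOrder({0})", orderId);
        try
        {
            // ... business logic ...
            Logger.Info("Order {0} placed", orderId);
        }
        catch (Exception ex)
        {
            Logger.Error(ex, "Placing order {0} failed", orderId);
            throw;
        }
        Logger.Trace("End PlaceOrder({0})", orderId);
    }
}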

Information scope

Depending on the purpose of logging, we should choose the set of properties that we want to store with each log message. They can be divided into the following groups, and all four groups should be included (a small sketch follows the list):

  • when the situations occurs,
  • where in code,
  • who caused this situation,
  • what happened.
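
A hedged sketch of how a single log call can carry all four groups (the logger and the method names are assumptions, reusing the NLog example above):

Logger.Info("{0:u} | {1} | user={2} | {3}",
    DateTime.UtcNow,                        // when the situation occurred
    "OrderService.PlaceOrder",              // where in code
    Thread.CurrentPrincipal.Identity.Name,  // who caused it
    "order placed");                        // what happened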

Logging scope

The last important thing to consider during log configuration is when we want to write a log entry. Of course it depends mostly on the application, but we can define some general rules for what we should log:

  • all exceptions,
  • validation failures,
  • authentication success and fail,
  • authorization failures,
  • system events (startup, closing, changing date),
  • use of high-risk functionality (closing operational day).

Of course, logging too much information can be more harmful than helpful. That's why you have to be sure that you don't log any of the information listed below:

  • duplicated failures,
  • regular data changes,
  • sensitive information (personal data, passwords),
  • application source code.

Resources

https://www.owasp.org/index.php/Logging_Cheat_Sheet
https://www.owasp.org/index.php/OWASP_Security_Logging_Project
https://developer.atlassian.com/confdev/development-resources/confluence-architecture/logging-guidelines