
Expedition in search of good documentation


Some time ago I read about a new StackOverflow section named Documentation. At the very beginning, I was very excited about this feature. It sounds great to structure and organize all the knowledge contained in this portal. Every day thousands of developers contribute to growing one of the world’s biggest knowledge bases. It is a great goal to achieve. However, when I started to look carefully at the content and form of the knowledge in this documentation, I realized that I don’t even know what good documentation means. So I decided to start my journey in search of good documentation.

Why do we create documentation?

First, we have to consider the most important question: why do we need to bother with documentation at all? There are two cases:

  • writing a library that other developers may use
  • developing a regular application

We can easily see the difference between these two variants, but in the case of documentation they behave almost the same. If we accept that every part of application code needs maintenance, we can say that we write every project for other developers. In some cases, the other developer will be us, some time later. Generally, however, we can assume that independently of the project’s purpose, we should store our knowledge. This is especially important if we decide to create custom components that don’t follow common standards. That kind of documentation is a great help when a fresh person wants to use or modify the code. It can also be the first place to look for the business assumptions used in the project.

Who wants to use documentation and when?

Based on the above purposes of creating documentation, we can identify three personas that may want to use it:

  • future me – when we need to make some modification after a year of working on another project
  • fresh developers in the team – when they need to hop into the project and start developing
  • users of our project – when we create a library or API available for other developers

We focus only on technical documentation here, but for the last point we could also include business user documentation. API documentation, on the other hand, is a slightly different subject, so I will describe it in more detail later.

What should documentation include?

This is not an easy question. It mostly depends on our audience and purpose. However, we can definitely select some elementary components.

  1. Instructions for installation or getting started
  2. Definition of the conventions for use/contribution
  3. How to use the project’s features
  4. How to contribute/extend

I know that this list doesn’t exhaust the whole variety of important documentation components, but these are definitely the most common ones.

What are the qualities of good documentation?

I have reviewed a lot of documentation, and much of it I have also used in my work as a developer. Based on this experience, I can specify the main characteristics of usable documentation.

Searchable

We search all the time when we use documentation, so we should be able to find any necessary piece of information as quickly as possible. Arguably, this is the most important feature of documentation. It can be achieved with a regular search box or with a well-organized table of contents.

Provides quick start section

Well-designed documentation should help users with different levels of knowledge. We should provide an easy starting guide for people who try to use our product for the first time. It is very important not to put up barriers to learning for newcomers.

Documentation should also describe the common parts of the application. For example, it should specify how to handle errors when using the library and how to authenticate. All these side notes depend heavily on the context of the project and relate to typical, common user questions.

Examples

The last, but definitely not least, characteristic of good documentation is examples. They show how to use a feature in user code and make it possible to understand the behavior without writing the code yourself, especially when we can run them straight from the documentation. Examples remove the abstraction layer hidden behind the plain documentation text.

Different approaches to documentation

In the end, I want to share with you my list of the best documentation that I know.

Kendo UI Grid


It contains a good search component and a well-organized index. It also has many examples that we can execute in the browser. Very usable and fast to navigate.

Aurelia


Like the previous one, it has great search and good technical documentation. What impressed me most is the guides section, which contains a set of general articles depending on your role. You can specify whether you are a developer or a manager and read the guides designed specifically for you.

StackOverflow Documentation


The newest one, but the most promising. It is documentation created by the community, according to the community’s needs. It has a lot of examples and live demos on popular subjects. I will definitely follow and contribute to this project in the future.

LESS


An example of quite simple but very efficient documentation. In fact, it contains all the necessary information in a minimalist form. It would only need a better search mechanism to count among the very best.

Knockout.js


Similar to the LESS documentation: very simple and clean. Its one distinguishing feature is the way the subjects are structured. The whole documentation is written and organized as a story, starting from the installation and moving on to more advanced functionality.

You can find a good description of creating your own documentation here: https://www.sitepoint.com/products-documentation-good-enough/


NUnit – generic classes tests


Some time ago I faced the task of writing tests for generic classes. In the simplest approach it is quite an easy task. To begin, let’s assume that the class we want to test implements the following interface:

public interface ISerializer<T>
{
    T Deserialize(string text);

    string Serialize(T obj);
}

If we want to test this class using, for example, the NUnit library, we can simply write a few test cases. In our case we decide to write two basic tests.

[Test]
public void ShouldSerializeAndDeserializeCorrectly()
{
    var serializer = new Serializer<Cat>();
    var obj = new Cat();

    var serialized = serializer.Serialize(obj);
    var deserialized = serializer.Deserialize(serialized);

    Assert.AreEqual(deserialized, obj);
}

[Test]
public void ShouldDeserializeWithErrorCorrectly()
{
    var serializer = new Serializer<Cat>();
    var obj = new Cat();

    var serialized = serializer.Serialize(obj);
    var deserialized = serializer.Deserialize(serialized + "=");

    Assert.AreEqual(deserialized, obj);
}

As you can see, this kind of test is easy to understand and very simple overall. In this case it is the best choice. But how do you think it will work when we have a dozen implementations of our interface and want to test them all? Or when we want to test one implementation with several classes as the generic parameter? Or imagine that we want both: many implementations tested with many classes. Now the number of test cases would grow very fast, and it could be quite hard to maintain that amount of similar code.

Testing for different type parameters

To achieve this we can use the NUnit feature named TestCaseSource. It gives us the possibility to define our test cases dynamically.

[TestFixture]
public class TestClass
{
    public static IEnumerable<IGenericTestCase> TestCases()
    {
        yield return new GenericTestCase<Cat>();
        yield return new GenericTestCase<Dog>();
    }

    [Test]
    [TestCaseSource("TestCases")]
    public void ShouldSerializeAndDeserializeCorrectly(IGenericTestCase testCase)
    {
        testCase.ShouldSerializeAndDeserializeCorrectly();
    }

    [Test]
    [TestCaseSource("TestCases")]
    public void ShouldDeserializeWithErrorCorrectly(IGenericTestCase testCase)
    {
        testCase.ShouldDeserializeWithErrorCorrectly();
    }
}

As you can see, we can add a parameter to our tests and define how it will be populated using the TestCaseSource attribute. In the static method we create the test cases dynamically. But let’s look at what the test parameter is. We pass an interface to the tests, but we put specifically typed objects into our source data. It looks as follows:

public interface IGenericTestCase
{
    void ShouldSerializeAndDeserializeCorrectly();

    void ShouldDeserializeWithErrorCorrectly();
}

public class GenericTestCase<T> : IGenericTestCase
    where T : new()
{
    public void ShouldSerializeAndDeserializeCorrectly()
    {
        var serializer = new Serializer<T>();
        var obj = new T();

        var serialized = serializer.Serialize(obj);
        var deserialized = serializer.Deserialize(serialized);

        Assert.AreEqual(deserialized, obj);
    }

    public void ShouldDeserializeWithErrorCorrectly()
    {
        var serializer = new Serializer<T>();
        var obj = new T();

        var serialized = serializer.Serialize(obj);
        var deserialized = serializer.Deserialize(serialized + "=");

        Assert.AreEqual(deserialized, obj);
    }
}

The test logic inside GenericTestCase is exactly the same as in the previous basic tests. That is correct, because the logic is the same; the only thing we want to change is the way of generating test cases.

This method is also very easy, and it is most beneficial if you really want to test a generic class in many dimensions.

You probably noticed that the generic test case above supports only changing the type parameter, but it is not a problem to also support different implementations. We only have to change a few places.

public class GenericTestCase<T, TImpl> : IGenericTestCase
    where T : new()
    where TImpl : ISerializer<T>, new()
{
    public void ShouldSerializeAndDeserializeCorrectly()
    {
        var serializer = new TImpl();
        // ...
    }

    public void ShouldDeserializeWithErrorCorrectly()
    {
        var serializer = new TImpl();
        // ...
    }
}
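With the two-parameter version, the test case source can enumerate every combination. The Cat and Dog models come from the earlier examples, while JsonSerializer and XmlSerializer are hypothetical ISerializer&lt;T&gt; implementations used only for illustration:

```csharp
// Sketch of a registration method for the two-parameter test case:
// each yield pairs a model type with a serializer implementation,
// so NUnit runs every test once per combination.
public static IEnumerable<IGenericTestCase> TestCases()
{
    yield return new GenericTestCase<Cat, JsonSerializer<Cat>>();
    yield return new GenericTestCase<Cat, XmlSerializer<Cat>>();
    yield return new GenericTestCase<Dog, JsonSerializer<Dog>>();
    yield return new GenericTestCase<Dog, XmlSerializer<Dog>>();
}
```

Adding a new implementation or a new model then means adding a single yield line instead of duplicating whole test methods.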

I hope these examples will help you understand the TestCaseSource mechanism in NUnit.


WebAPI integration tests with OWIN


An integration test checks the behaviour of an application from the interface down to the database. In our case it will be used with an API. It is most useful for:

  • checking if the API operates on data correctly
  • performing smoke tests (checking if some part of the API is working)
  • simulating some types of behaviour on the API server

Add OWIN into existing app

In this article we will be talking mostly about WebAPI integration tests. In this framework it is very easy to set up. But first, we should be able to start the API server independently. We can use the OWIN Katana standalone server functionality. This is a different way of hosting WebAPI: we can still run our API on an IIS server, but OWIN gives us the ability to start the server differently.

We can migrate our project to OWIN very easily. It is only necessary to add two references and configure the server.

So at the beginning, add references to the following NuGet packages:

  • Microsoft.Owin.Host.SystemWeb
  • Microsoft.AspNet.WebApi.Owin


After this operation, the only thing left is to configure the server. We can do this by defining a class and tagging it with the proper attribute:

[assembly: OwinStartup(typeof(Owasp.AppSensor.Demo.Api.Startup))]
namespace Owasp.AppSensor.Demo.Api
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration httpConfiguration = new HttpConfiguration();
            WebApiConfig.Register(httpConfiguration);
            app.UseWebApi(httpConfiguration);
        }
    }
}

The OwinStartup attribute can be placed anywhere in the project because it is an assembly attribute. It defines which class will be used for the OWIN configuration. Generally, the Configuration method should do exactly the same as the Application_Start() method defined in Global.asax; moreover, Application_Start() won’t be used any more. The only line necessary for us and different from Global.asax is

app.UseWebApi(httpConfiguration);

It basically configures the WebAPI framework to work properly with controllers and its whole pipeline.

In the end we can check if everything works fine just by setting a breakpoint in the Configuration method. If the breakpoint is hit, OWIN is configured.

Basic API test

When we have successfully set up the OWIN server, we can start working on an integration test. We should add one more library before we write the first test. This time we will need:

  • Microsoft.Owin.Testing

Now we can create our test very quickly. The whole test can be divided into three parts:

  • Arrange (start API server)
  • Act (execute API method)
  • Assert (check result of sent request)

This process can be written concisely:

[Test]
public async Task TestGoodMethod()
{
    using (var server = TestServer.Create<Startup>())
    {
        var result = await server.HttpClient.GetAsync("api/book");
        string responseContent = await result.Content.ReadAsStringAsync();
        var entity = JsonConvert.DeserializeObject<List<string>>(responseContent);

        Assert.IsTrue(entity.Count == 3);
    }
}

The first line, TestServer.Create&lt;Startup&gt;(), starts an OWIN server using the Startup class as its configuration, hosted under a default base address. Then we can easily execute requests against this server: within TestServer we have access to an HttpClient that can execute any type of request. As a result we get the string value of the complete HTTP response.

That’s all we need to test API methods.

http://www.strathweb.com/2013/12/owin-memory-integration-testing/

Mocking object in test

One interesting aspect of integration API testing is mocking test data inside the test server. We can do it in this approach as well.

I will describe this mechanism with a Dependency Injection library in use, because it helps a lot with mocking data. In most cases the DI container is configured in the Global.asax file, so for the OWIN server the configuration can be placed in the Startup.Configuration() method.

We can define virtual method to be able to set up our mocks in tests.

public void Configuration(IAppBuilder app)
{
    HttpConfiguration httpConfiguration = new HttpConfiguration();
    WebApiConfig.Register(httpConfiguration);

    var builder = new ContainerBuilder();
    builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
    builder.RegisterWebApiFilterProvider(httpConfiguration);
    builder.RegisterModule<ServicesDependencyModule>();

    ConfigureIoC(builder);
    var container = builder.Build();
    httpConfiguration.DependencyResolver = new AutofacWebApiDependencyResolver(container);

    app.UseWebApi(httpConfiguration);
}

protected virtual void ConfigureIoC(ContainerBuilder builder)
{

}

Thanks to this, we can inherit from this class in our tests and override the ConfigureIoC method to add custom mocks.

public class TestStartup : Startup
{
    protected override void ConfigureIoC(ContainerBuilder builder)
    {
        // set up mocks
        base.ConfigureIoC(builder);
    }
}
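A test can then boot the server with the derived startup class. The api/book endpoint is reused from the earlier example; assuming the mocks were registered in TestStartup.ConfigureIoC, the controllers will resolve them instead of the real services:

```csharp
[Test]
public async Task ShouldUseMockedServices()
{
    // TestServer.Create<TestStartup>() executes the overridden ConfigureIoC,
    // so the request below hits controllers wired to the mocks
    using (var server = TestServer.Create<TestStartup>())
    {
        var result = await server.HttpClient.GetAsync("api/book");

        Assert.AreEqual(HttpStatusCode.OK, result.StatusCode);
    }
}
```

This keeps the production Startup untouched while every test decides for itself which dependencies to fake.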

More resources:
https://blog.kloud.com.au/2014/10/26/asp-net-web-api-integration-testing-with-one-line-of-code/
http://www.aaron-powell.com/posts/2014-01-12-integration-testing-katana-with-auth.html
http://amy.palamounta.in/blog/2013/08/04/integration-testing-for-asp-dot-net-web-api/




Logging guidelines for developers


One of my tasks during my work on the AppSensor .NET library was to create a simple implementation of a log provider. It inspired me to do some research on the rules for creating a good logging system.

Think about the purpose

The first thing necessary before creating, or even configuring, a logging system is to think about its purpose: when we might need the information and how it can be useful for future application use. The main reasons to set up logging are listed below:

  • errors – we should be aware of all errors appearing in our system, on both the front-end and the back-end side,
  • security – diagnostic information related to access control or authorization can lead us to the detection of potential attacks,
  • information about problems/unusual conditions – when the system detects some unusual action or reaches a state that might need administrative action (e.g. lack of disk space), the log can be the source of more detailed information,
  • control of process flow – logs can also provide information about a process’s progress and how it behaves.

Stores

The next decision to take is where we want to store the log data. We have several possibilities to choose from:

  • files on disk – very fast and easy, but not suitable for distributed systems; it is also hard to query information within them,
  • database – very useful, easy to query, good for distributed or load-balanced systems; a little more difficult to configure and needs a DB server,
  • external providers (Loggly, Splunk, …) – easy to set up, very convenient for querying and reporting, but slow because data is sent over the Internet.

Logging levels

In a typical application we want to log many different types of events. If we are talking about web systems, the number of users on-line simultaneously can be very large, and from this point of view the number of logs can be tremendous. That’s why it is so important to set up good levels of detail. Usually there are the following levels:

  • Fatal – Highest level: important stuff down
  • Error – For example application crashes / exceptions.
  • Warn – Incorrect behaviour, but the application can continue
  • Info – Normal behaviour like mail sent, user updated profile etc.
  • Debug – Executed queries, user authenticated, session expired
  • Trace – Begin method X, end method X etc.

According to: https://github.com/NLog/NLog/wiki/Log-levels

A good configuration of log levels gives us a parameter to set the level of detail dynamically, depending on our needs. More specific logs give more information, but they take much more disk space.
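As a sketch of how these levels map to code, here is how logging calls might look with NLog, the library the level list above comes from (the OrderService class and its messages are made up for illustration):

```csharp
using System;
using NLog;

public class OrderService
{
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    public void PlaceOrder(int orderId)
    {
        Logger.Trace("Begin PlaceOrder");                   // Trace: method flow
        Logger.Debug("Loading order {0}", orderId);         // Debug: diagnostic detail
        try
        {
            // ... business logic ...
            Logger.Info("Order {0} placed", orderId);       // Info: normal behaviour
        }
        catch (Exception ex)
        {
            Logger.Error(ex, "Placing order {0} failed", orderId); // Error: exception
            throw;
        }
        Logger.Trace("End PlaceOrder");
    }
}
```

Running at the Info level, only the “Order placed” message survives; switching the configuration to Trace exposes the full flow without touching the code.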

Information scope

Depending on the purpose of logging, we should choose a set of properties to store for each log message. They can be divided into the following groups, and all four groups should be included:

  • when the situation occurred,
  • where in code,
  • who caused this situation,
  • what happened.
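As a sketch, an NLog layout can capture all four groups in one line; the renderer names below exist in NLog, but treat this exact selection as an assumption to adapt to your own setup:

```
${longdate} | ${callsite} | ${identity} | ${level} | ${message} ${exception:format=tostring}
```

Here ${longdate} answers when, ${callsite} answers where in code, ${identity} answers who, and the level plus message answer what happened.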

Logging scope

The last important thing to consider during log configuration is when we want to save a log entry. Of course it depends mostly on the application, but we can define some general rules for what we should log:

  • all exceptions,
  • validation failures,
  • authentication success and fail,
  • authorization failures,
  • system events (startup, closing, changing date),
  • use of high-risk functionality (closing operational day).

Of course, logging too much information can be more harmful than helpful. That’s why you have to be sure that you don’t log any of the information listed below:

  • duplicated failures,
  • regular data changes,
  • sensitive information (personal data, passwords),
  • application source code.

Resources

https://www.owasp.org/index.php/Logging_Cheat_Sheet
https://www.owasp.org/index.php/OWASP_Security_Logging_Project
https://developer.atlassian.com/confdev/development-resources/confluence-architecture/logging-guidelines