
Tuesday 19 March 2013

Model based testing - Osmo

Let's start with tool one: Osmo

Osmo is a Java-based tool. I started to use it to gain more confidence in my tests by connecting them with each other.
You can download it from here: https://code.google.com/p/osmo/

It is a really simple tool. It gives you a few annotations (@Transition, @Guard, @Post, @Pre, etc.) that you can use to annotate the corresponding methods, and it gives you basic ways to generate test scenarios, like completely random or balanced random (so each transition gets executed a similar number of times).
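
To make this concrete, here is a minimal sketch of what such a model class can look like. It only uses the annotations mentioned above; the import paths and the exact annotation semantics depend on the Osmo version you download, so treat the details as assumptions rather than a copy-paste recipe.

    import osmo.tester.annotation.Guard;
    import osmo.tester.annotation.Post;
    import osmo.tester.annotation.Transition;

    // A tiny login/logout model: the boolean field is the model state, the guards
    // decide which transitions are allowed in the current state, and the post
    // condition asserts what we expect after every step.
    public class LoginModel {
      private boolean loggedIn = false;

      @Guard("login")
      public boolean canLogin() {
        return !loggedIn;
      }

      @Transition("login")
      public void login() {
        // call the system under test here (e.g. send the login request),
        // then update the model state
        loggedIn = true;
      }

      @Guard("logout")
      public boolean canLogout() {
        return loggedIn;
      }

      @Transition("logout")
      public void logout() {
        // call the system under test here, then update the model state
        loggedIn = false;
      }

      @Post
      public void checkState() {
        // assert that the state of the system under test matches the model
        // (e.g. the session exists exactly when loggedIn is true)
      }
    }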

You can define end conditions that decide when to finish a test or a suite (time based, coverage based), and you can easily create your own. It also has other features, for example binding transitions to application features. I won't go into details here, only if you are interested. Instead I will tell you what I tried to achieve with it, and at the end what I decided to do with it.
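
Wiring the model into the generator is only a few more lines. Roughly like this; the class and method names below are from memory and may differ between Osmo versions, so take it as a sketch rather than exact API:

    import osmo.tester.OSMOTester;
    import osmo.tester.generator.endcondition.Length;

    public class RunLoginModel {
      public static void main(String[] args) {
        OSMOTester tester = new OSMOTester();
        tester.addModelObject(new LoginModel());
        // end conditions: stop a single test after 50 steps, stop the suite after 10 tests
        // (time based and coverage based end conditions can be plugged in the same way)
        tester.setTestEndCondition(new Length(50));
        tester.setSuiteEndCondition(new Length(10));
        tester.generate();
      }
    }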

First I tried to use it in my API tests (it is a JSON over HTTP API). Because I was aware of the problem of an overly complex model, I decided to cover only the happy paths. So I created a model for each interface; I could easily implement the transitions (login, logout, register, create event, etc.) and I could use my previously created test user object to track the state. So it was pretty easy, but in the end meaningless... 

Why? 

Mostly because the API was so simple and so well covered with functional tests that it was really unlikely I could discover anything with a slow, limited model... Maybe I was wrong, but I already knew there was another tool I could also use... so I decided to try that one out as well... but that is a different story.

But I really liked Osmo's ease of use, and I saw its power. So I started to look for a better place for it.

We have another test framework for the web GUI. It is based on Selenium and it uses JBehave (maybe I will write another post about how you can integrate another test tool into JBehave). So it is a high-level thing, with page object models.

And it was promising... I just closed my eyes and imagined a test that wanders around the whole site randomly, performs various GUI actions, and continuously checks the states... and it was a good image. 
Maybe I am alone with this, but I have often faced issues that occur close to, but above, the level of the functional tests, like: 
  • after the 2nd page of the registration, if you click back, and next, and back, and next, you lose your session 
  • if you open page A, go to page B and log in there, click on the promotion, click back, log out, and open page A again, your session is still there, or you get a JavaScript exception, or whatever...
  • you can imagine more. The common thing: each function works, but the (well, to be honest, CRAZY) combination of them does not.
So here I do not need to be fast; what I want is just to do whatever I can on the page and see what happens... So I created a prototype for this (a rough sketch of the idea follows below), which we can extend later, basically adding another layer of testing. Previously we had tests for pages, and now we can have tests for the whole site.
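
The prototype is the same kind of model as before, just with our page objects behind the transitions. A sketch of the idea; LoginPage and HomePage here are made-up placeholders for our real Selenium page objects, and the annotations are the same Osmo ones as in the sketch above:

    import osmo.tester.annotation.Guard;
    import osmo.tester.annotation.Post;
    import osmo.tester.annotation.Transition;

    // A site-level model on top of the page objects. LoginPage and HomePage are
    // placeholders standing in for the real page objects of our GUI framework.
    public class SiteModel {
      private final LoginPage loginPage = new LoginPage();
      private final HomePage homePage = new HomePage();
      private boolean loggedIn = false;

      @Transition("openHome")
      public void openHome() {
        homePage.open();
      }

      @Guard("login")
      public boolean canLogIn() {
        return !loggedIn;
      }

      @Transition("login")
      public void logIn() {
        loginPage.open();
        loginPage.logInAs("testuser"); // placeholder test account
        loggedIn = true;
      }

      @Guard("logout")
      public boolean canLogOut() {
        return loggedIn;
      }

      @Transition("logout")
      public void logOut() {
        homePage.logOut();
        loggedIn = false;
      }

      @Post
      public void checkPage() {
        // after every step: no JavaScript errors on the page, and the logged
        // in/out state shown on the page matches what the model expects
      }
    }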

And what else can you do with such a model? You can use it for monitoring your application. If you balance the transitions and make an endless run, it will continuously monitor everything (that is in the model) for you. And with the balancing you can put more focus on the critical components, and less on the things you do not really care about. 

And if your things are in Java, it is really easy to do.

Next time I will tell you about the other model based testing tool....

Friday 1 March 2013

Model based testing - the future?

So there is this thing: model based testing... sometimes it is also called property based testing (strictly speaking the two are related rather than identical). The concept is simple: instead of writing test cases (which can be really, really inefficient), you just tell a droid what to do. Of course it is not that simple... the droids nowadays require a lot of explanation...

In a nutshell it means the following: you create a finite state machine that describes the expected behavior of your application, and based on that you can generate tests.
The edges describe the actions you need to perform on the system under test, and the nodes are the expected states, so the content of your assertions. Sounds simple...

My previous experiences with model based testing were...
  • If there is any tool support for visualizing, executing, storing, creating, whatever, it is really expensive, and it dictates the way you can interact with the system... you are strongly bound to the tool.
  • After a while, when you try to cover all your requirements with the model, you end up creating a huge, unmaintainable monster that is more difficult to understand than the implemented system, and the hair stands up on the back of your neck if you even think about having to change it
  • so you decide to throw it away and implement tests based on your requirements and your inspiration, with a few comments, so everyone can understand what the feature/method is about...
But what do we have now?

In the last few months I started to use two tools... well, I am not extremely experienced in either of them, but at least I am familiar with the features they can give to a QA engineer. The two tools are similar on a few points:
  • you use a programming language to implement your model based test
    • so you have to do your own FSM implementation
  • the tool provides you the bricks you can use to build up your model:
    • a transition/action where you call a method to change the state of the system under test (aka test call)
    • a post condition where you can check whether the state of your application is what you expect (after the login I have a proper session)
    • a precondition/guard where you can filter out those transitions that you do not want to call in the given state (e.g. logout when you are not logged in)
    • next state: where you can change your state (it only exists in one of the tools; in the other you have to put it in the transition or in the post condition)
  • and the tool gives you other features to make the execution better (on this level, they are really different)
So the bricks are simple. A lot depends on you and how you implement this thing, and here the KISS principle is critical. Below is a rough sketch of how these bricks fit together.
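
A minimal plain-Java sketch (no specific tool, just to illustrate the idea): the generator picks a transition whose guard allows it in the current state, executes it, and then runs the post condition. The login/logout model here is of course made up.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // A toy illustration of the bricks: guard + transition (action) + post condition.
    // A real tool provides the generator loop below; you only supply the bricks.
    public class TinyModelRunner {

      interface Step {
        boolean guard();  // may this step run in the current state?
        void action();    // perform it: call the system under test, update the model state
        void post();      // check the state afterwards
      }

      public static void main(String[] args) {
        final boolean[] loggedIn = {false}; // the model state

        List<Step> steps = new ArrayList<>();
        steps.add(new Step() { // login
          public boolean guard() { return !loggedIn[0]; }
          public void action() { loggedIn[0] = true; System.out.println("login"); }
          public void post() { /* assert the system under test has a session */ }
        });
        steps.add(new Step() { // logout
          public boolean guard() { return loggedIn[0]; }
          public void action() { loggedIn[0] = false; System.out.println("logout"); }
          public void post() { /* assert the system under test has no session */ }
        });

        Random random = new Random();
        for (int i = 0; i < 20; i++) { // end condition: 20 steps
          List<Step> enabled = new ArrayList<>();
          for (Step step : steps) {
            if (step.guard()) {
              enabled.add(step);
            }
          }
          Step chosen = enabled.get(random.nextInt(enabled.size()));
          chosen.action();
          chosen.post();
        }
      }
    }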

And there is another thing about model based testing. As testers we want to think on an end-to-end level. We use the requirements to create tests, we try to put ourselves in the user's/customer's mind, we want to discover the cases where something can go wrong. And it is hard to align these things with model based testing...

Why?

Sometimes it is just too much for one model; you must keep it simple!

So I can imagine two strategies to create the model:
  1. Find the area in your application which is the most critical, but relatively small, and apply an extensive model to that (so you reduce your scope in order not to end up with a large model).
    1. Benefit? Your model won't be complex, and your critical part will be covered. And do not forget, you can have multiple models for different aspects/parts...
  2. Step back, view the whole system, collect the interactions your users will normally perform with it, and create a model based on that. So basically you connect the happy paths together, to see whether, after long and diverse usage, your application is still flawless. 
    1. Benefit? Your model is not complex, because your functional tests will focus on the edge cases and the other parts of the requirements that are too expensive to cover here.
So that's it for today. In the next posts I will introduce the tools to you, what I tried (and failed at), and what I finally did (and am doing) with them.