
Saturday 18 May 2013

Abstracting abstract abstractions

Even on Dagobah there is no such thing as enough rain.

Abstractions are good. We can simplify things based on how we want to use them. The last time I was at a QA conference on our one and only Death Star in the Nether sector, there was a whole presentation about how to create a good framework by building layers of abstractions.

Let's take an example:
    You have to test a backend user management system which has a REST API (or SOAP, or JSON over HTTP, or basically whatever). When you test it, you will surely create an abstraction which somehow maps your tests to actual HTTP calls, and hopefully you won't make raw HTTP calls from your test methods every time... This is layer 1.
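A minimal sketch of what layer 1 can look like, assuming plain Java and the built-in HttpURLConnection; the class name, constructor argument and endpoints are made up for illustration:

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// Layer 1: a thin wrapper that hides the raw HTTP plumbing from the tests.
public class UserApiClient {

    private final String baseUrl;

    public UserApiClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // POST a JSON body to an endpoint and return the response body as a string.
    public String post(String path, String jsonBody) throws IOException {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(baseUrl + path).openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setDoOutput(true);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(jsonBody.getBytes(StandardCharsets.UTF_8));
        }
        try (Scanner scanner = new Scanner(connection.getInputStream(),
                StandardCharsets.UTF_8.name())) {
            return scanner.useDelimiter("\\A").hasNext() ? scanner.next() : "";
        }
    }
}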

    After a while, when you get bored of setting up the same object over and over again whenever you want to log in with a test user, or register, or do something else, you will decide to create another abstraction with util methods for the commonly used actions or groups of actions you have to perform again and again. This is layer 2.
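Layer 2 is then just a handful of util methods on top of it; again a hypothetical sketch, with the JSON bodies simplified on purpose and the checked exception wrapped so the tests stay clean:

import java.io.IOException;

// Layer 2: util methods for the actions the tests keep repeating.
public class UserApiActions {

    private final UserApiClient client;

    public UserApiActions(UserApiClient client) {
        this.client = client;
    }

    public void register(String email, String password) {
        post("/users", "{\"email\":\"" + email + "\",\"password\":\"" + password + "\"}");
    }

    public void login(String email, String password) {
        post("/sessions", "{\"email\":\"" + email + "\",\"password\":\"" + password + "\"}");
    }

    // Wrap the checked exception so test code does not have to declare it.
    private void post(String path, String body) {
        try {
            client.post(path, body);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}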

   After a few months, or after adding many more tests, or after covering many new features, you realize that even with these util methods it is sometimes painful (in terms of lines of code, or readability / maintainability) to set up complex preconditions. So you decide you need another layer. But it is not obvious what it should be.

    Maybe you already have a class which represents your test users. It can hold basic or not-so-basic info like: first name, last name, date of birth, email, password, number of dogs, IQ, etc. And maybe you will start to wonder: if I have a list (or something like a list) of attributes a user can have, why don't I have a list of actions a user can perform? You can have things like: login, register, send a message to someone, get banned, fly to the moon, get fired, get attacked by Jedi, etc.
    One of the best ways to list things in Java is to create an enum. Another benefit is that you can define an abstract method which every enum constant must implement, something like doAction(TestUser). So in your enum you can list your actions and implement the required API calls to achieve each action (using things from the previous layer). And in your test user object you can add a method like doActions(MyOwnActions... actions) and list the things you want to perform with your test user. This is layer 3, and it is really handy:
user.doActions(REGISTER, LOGIN, GO_TO_MOON, KILL_A_JEDI, EAT_BAKLAVA);
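A rough sketch of how that enum and the doActions(...) method can look, building on the hypothetical layer 1-2 classes above; every name here is illustrative, not a real framework:

// Layer 3: the actions a test user can perform, listed in an enum where every
// constant implements the abstract doAction(TestUser) using the layer 2 utils.
enum MyOwnActions {

    REGISTER {
        @Override
        public void doAction(TestUser user) {
            user.actions().register(user.getEmail(), user.getPassword());
        }
    },
    LOGIN {
        @Override
        public void doAction(TestUser user) {
            user.actions().login(user.getEmail(), user.getPassword());
        }
    };

    public abstract void doAction(TestUser user);
}

// A trimmed-down test user: some attributes plus the doActions(...) entry point.
class TestUser {

    private final String email = "vader@example.com";
    private final String password = "secret";
    private final UserApiActions actions =
            new UserApiActions(new UserApiClient("http://localhost:8080")); // placeholder URL

    UserApiActions actions() { return actions; }
    String getEmail() { return email; }
    String getPassword() { return password; }

    void doActions(MyOwnActions... actions) {
        for (MyOwnActions action : actions) {
            action.doAction(this);
        }
    }
}

So the test itself stays a one-liner like the example above, while the enum hides which API calls are needed to reach that state.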

    So we have 3 layers for simple API testing, which abstract the HTTP calls to a level where you can just list what you want to do and your test user will end up in that state. But what happens if someone who is testing the GUI, for example, needs your test user thingy? You can tell them to do what you did: before the tests, create a test user instance and set it up with the required actions, and so on.
It means they have to change or duplicate the same kind of thing in multiple places (of course it depends; maybe you keep your test class hierarchy flat, and you do not need users in every test). So maybe in that framework it would be ugly to set up the test users everywhere, and you want to provide a nice way to just get what you want.
    You already have a static abstraction of the actions your test users can perform, so you can easily build dependency injection on top of it.
    Well, I know this example is not that common, but in my case we cannot just inject the dependency with Guice or something... here the test user depends on runtime parameters, and you need a given and often varying state of the user.
    What can you do? You can create an annotation like @TUser(actions={Action1, Action3}), and if you use TestNG you can implement IMethodInterceptor so that you can create a test user in the annotated field based on whatever parameters (from the test instance, or runtime things) you decide to use. So you have abstracted the test user initialization, and this is layer 4.
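A rough sketch of the idea with made-up names; instead of a full IMethodInterceptor (which the real solution uses, together with the runtime parameters), the reflection that fills the annotated field is hidden behind a plain TestNG @BeforeMethod in a base class, just to keep the sketch short:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

import org.testng.annotations.BeforeMethod;

// Layer 4: declarative test user setup. Hypothetical sketch only.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface TUser {
    MyOwnActions[] actions() default {};
}

abstract class TestUserAwareTest {

    // Before each test method, find every @TUser field declared on the test
    // class and fill it with a user that already performed the requested actions.
    @BeforeMethod
    public void injectTestUsers() throws IllegalAccessException {
        for (Field field : getClass().getDeclaredFields()) {
            TUser annotation = field.getAnnotation(TUser.class);
            if (annotation != null) {
                TestUser user = new TestUser();
                user.doActions(annotation.actions());
                field.setAccessible(true);
                field.set(this, user);
            }
        }
    }
}

A test class extending this base can then simply declare @TUser(actions = {REGISTER, LOGIN}) private TestUser user; and use the field directly in its test methods.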

...well, when I implemented this I thought there were at least 8 levels of abstraction... it's only 4, but the message is clear: if things are getting complicated and you have a lot of duplication, do not be afraid to take a step back, look at it from a new point of view, and move the things you need into another layer.

...sorry for the half-broken language, my brain is half off....
 

Gungans are everywhere

Maybe it is just me, but I often get annoyed by the dozens of Gungans who have appeared around me.... they are everywhere...

I am on the users' and devs' mailing lists of a testing tool, and each and every day I see a post from a Gungan... it is a safe bet that a post was sent by a Gungan when:

  • he sends his question to both lists; he did not bother to read the purpose of the mailing lists, and of course when someone tells him not to do so, he apologizes and promises not to do it again (it does not really matter, because someone else will do the same the next day)
  • his question is
    • pointless or meaningless; I mean, I am not an expert in the High Galactic language, but at least I try to describe my problem as simply and in as much detail as possible (true, not always perfectly), but here you have to call a protocol droid to understand the message.
    • so extremely generic that at first you think it is a joke
    • completely unrelated to the tool, so you guess he was just using that tool in a different window when the problem occurred in a completely different application in another window/screen/battle ship
    • unrelated to the tool, so the problem is with something this tool is using, and it is clear from the error message, or whatever
  • his email
    • is without details... so basically you have no idea what he is talking about, but he experienced something with that tool
    • has a lot of details... so you get the description of the problem, the log files, the contents of his HDD, his mail history, all together bigger than 1 petabyte.
    • has enough details but copy-pasted into the mail with broken formatting, so no one can actually understand it (and sometimes the last two are combined)
  • his problem is
    • easily solvable, so if he read the error message he could (hopefully) see what the problem is and how to solve it, because it explicitly tells him where the problem is and what to do, and it even gives a nice recipe for his date tomorrow....
    • already answered 10 billion times
    • solvable after 20 seconds of galactic network searching...
  • his emails
    • come sequentially
      • every half a day about the same problem, in the same thread
      • every day about the same problem, in a different thread
      • every minute about different "problems" in different threads (OK I know, at least that is proper problem/thread usage)
I have asked a lot of stupid questions as well (and I will again), but I always tried to find the answer first; I even tried to debug the application, sacrifice a Wookiee, anything to avoid asking...

... and only a Gungan would publicly call himself something like: Emperor-certified Jedi slayer professional after 2 missions on a deserted planet hunting scorpions...

Thursday 25 April 2013

Flashbacks

    It is always good if you are stationed in a peaceful quadrant of the galaxy. No annoying battles with the Rebels, no need to wear your white helmet in which you cannot breathe... but sometimes, even here, you have to jump back to the front line and fight.

    So what happened? I had to test a web GUI again after 1 year of peaceful API testing. It was not a usual test, it was simpler: I just had to load a few pages a few times... you could not wish for an easier task...

Good old problems I have faced:
  • a page just does not work in HtmlUnit; there is a meaningless JS exception in the log and it is impossible to figure out what went wrong, where, and why. And of course everything works perfectly in a real browser (= slow, slow, slow)
  • So you have to use a real browser, but:
    • you cannot set any cookie you want. Why? Because no! Details? An average user could not do it either... thanks! This is not an average user emulator but a testing framework, please! That is also the reason why you cannot get response status codes, delete the cache (not sure...) and other useless, unwanted features
    • you cannot use the domain and the expiration date even if you are on that page. It only works with name=value... no comment
    • Chrome hangs on loading page A, Firefox will hang for sure on page B... so you cannot use either of them without hacking...
    • You set the page load timeout and realize it is not supported by Chrome (see the sketch after this list). No, it is not important at all... the default 2 eons is absolutely fine. But do not worry, you can use a plugin in Chrome which does the same, but:
      • you cannot get the crx file...
        • well, you can get it, but it will take you a while; it is not easy to find at all, and you have to spend a lot of time to figure out how to get it
      • finally you get it and realize --load-extension does not like it, so in the end the unpacked folder that was always there is enough... no worries, I am paid by the hour...
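For reference, the calls in question look roughly like this in the Selenium Java API of that era; the timeout value and the cookie are of course made up:

import java.util.concurrent.TimeUnit;

import org.openqa.selenium.Cookie;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RealBrowserPains {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            // Not honoured by ChromeDriver at the time, which is the whole complaint.
            driver.manage().timeouts().pageLoadTimeout(30, TimeUnit.SECONDS);

            // Cookies can only be added for the domain of the page you are currently
            // on, and in practice only the simple name=value form behaved reliably.
            driver.get("http://example.com");
            driver.manage().addCookie(new Cookie("sessionHint", "value"));
        } finally {
            driver.quit();
        }
    }
}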
So it is fun.... pure fun. But anyway, what I can suggest is: never test a well-written site in HtmlUnit, using all its features, like listeners and the Emperor knows what else (it was long ago, in a faraway....), and then you will never be tempted to think that should be the standard for a web test framework....

...but for the Force's sake, if your test framework does not come with an AI, where it is enough to show it the site and tell it a few things to listen for, then do not block our access to the internals. This is our profession, we know what we want to do and why... at least me and my imaginary friends do

Tuesday 19 March 2013

Model based testing - Osmo

Let's start with tool one: Osmo

Osmo is a Java-based tool. I started to use it to gain more confidence in my tests by connecting them with each other.
You can download it from here: https://code.google.com/p/osmo/

It is a really simple tool. It gives you a few annotations: @Transition, @Guard, @Post, @Pre, etc., which you can use to annotate the corresponding methods, and it gives you basic ways to generate test scenarios, like completely random or balanced random (so the number of executions of each transition will be similar).
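To give a feel for it, a tiny happy-path model using those annotations might look roughly like this; the import paths and the convention of binding guards to transitions by name are from memory of that Osmo version, so treat the details as assumptions rather than documentation:

// Assumed import paths for the annotations mentioned above; they may differ
// between Osmo versions.
import osmo.tester.annotation.Guard;
import osmo.tester.annotation.Post;
import osmo.tester.annotation.Transition;

// A minimal happy-path model: the model keeps its own idea of the state,
// the transitions drive the real API, and the guards keep the generator from
// calling transitions that make no sense in the current state.
public class UserSessionModel {

    private boolean loggedIn = false;

    @Guard("login")
    public boolean canLogin() {
        return !loggedIn;
    }

    @Transition("login")
    public void login() {
        // call the real API here (e.g. through the test user object)
        loggedIn = true;
    }

    @Guard("logout")
    public boolean canLogout() {
        return loggedIn;
    }

    @Transition("logout")
    public void logout() {
        // call the real API here
        loggedIn = false;
    }

    @Post
    public void sessionMatchesModel() {
        // after every transition, check that the server-side session state
        // agrees with the loggedIn flag, e.g. by calling a status endpoint
    }
}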

You can define end conditions for when to finish a test or a suite (time based, coverage based), and you can easily create your own. It also has other features, for example binding transitions to application features. I won't go into details unless you are interested. I will tell you what I tried to achieve with it, and in the end what I decided to do with it.

First I tried to use it in my API tests (it is a JSON over HTTP API). Because I was aware of the problem of an overly complex model, I decided to cover only the happy paths. So I created a model for each interface; I could easily implement the transitions (login, logout, register, create event, etc.) and I could use my previously created test user object to track the state. So it was pretty easy, but in the end meaningless...

why? 

Mostly because the API was so simple and so well covered with functional tests that it was really unlikely that with a slow, limited model I could discover anything... Maybe I was wrong, but I already knew there was another tool I could also use... so I decided to try that out as well... but that is a different story.

But I really liked the ease of use of Osmo, and I saw its power. So I started to look for a better place for it.

We have another test framework for the web GUI. It is based on Selenium, and it uses JBehave (maybe I will write another post about how you can integrate another test tool with JBehave). So it is a high-level thing, with page object models.

And it was promising... I just closed my eyes and imagined a test which walks around the whole site randomly, performs several GUI actions, and continuously checks the states... and it was a good image.
Maybe I am alone in this, but I often face issues that occur close to, but above, the level of the functional tests, like:
  • after the 2nd page of the registration, if you click back, and next, and back and next you will lose your session 
  • if you open page A and you go to page B and there you log in, and you click on the promotion, and click back, and log out, and open page A again, your session is still there, or you will get a javascript exception, or whatever...
  • you can imagine more. The common thing: the functions work, but the (well, to be honest CRAZY) combinations of them do not.
So here I do not need to be fast; what I want to do is just do whatever I can on the page and see what happens... So I created a prototype for this, which we can later extend, basically adding another layer of testing. Previously we had tests for pages, and now we can have tests for whole sites.

And what else can you do with such a model? You can use it for monitoring your application. If you balance the transitions and make an endless run, it will continuously monitor everything (that is in the model) for you. And with the balancing you can put more focus on the critical components, and less on the things you do not really care about.

And if your things are in Java, it is really easy to do.

Next time I will tell you about the other model based testing tool....

Friday 1 March 2013

Model based testing - the future?

So there is this thing: model based testing... sometimes they call it property based testing. The concept is simple: instead of writing test cases (which can be really, really inefficient) you just tell a droid what to do. Of course it is not that simple... the droids nowadays require a lot of explanation...

In a nutshell it means the following: you create a finite state machine which describes the expected behavior of your application, and based on that you can generate tests.
The edges describe the actions you need to perform on the system under test, and the nodes are the expected states, so the content of your assertions. Sounds simple...

My previous experiences with model based testing were...
  • If there is any tool support for visualizing, executing, storing, creating, whatever, it is really expensive, and it dictates the way you can interact with the system... you are strongly bound to the tool.
  • After a while, when you try to cover all your requirements with the model, you end up creating a huge, unmaintainable monster which is more difficult to understand than the implemented system, and the hair stands up on the back of your neck if you even think about having to change it
  • so you decide to throw it away and implement tests based on your requirements and your inspiration, with a few comments, so everyone can understand what the feature/method is about...
But what do we have now?

In the last few months I started to use two tools... well, I am not extremely experienced with either of them, but at least I am familiar with the features they can give to a QA engineer. The two tools are similar in a few points:
  • you use a programming language to implement your model based test
    • so you have to do your own FSM implementation
  • the tool provides the bricks you can use to build up your model:
    • a transition/action, where you call a method to change the state of the system under test (aka a test call)
    • a post condition, where you can check whether the state of your application is what you expect (after the login I have a proper session)
    • a precondition/guard, where you can filter out the transitions you do not want to call in the given state (no logout when you are not logged in)
    • a next state, where you can change your state (it only exists in one tool; in the other you have to put it in the transition or in the post condition)
  • and the tool gives you other features to make the execution better (on this level they are really different)
So the bricks are simple. A lot depends on you and how you implement this thing. And here the KISS principle is critical.
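In plain Java, stripped of any particular tool, the bricks boil down to something like this; the names are mine, not from either tool:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// The bricks from the list above, tool-agnostic: a guard, an action and a
// post condition, bundled into one step/transition of the model.
interface ModelStep {
    boolean guard();        // may this step run in the current state?
    void action();          // drive the system under test
    void postCondition();   // assert the state we expect afterwards
}

// A naive random walker over the model: pick any step whose guard allows it,
// run it, check it, repeat. The real tools add balancing, end conditions,
// coverage tracking and reporting on top of this.
class RandomWalker {

    private final List<ModelStep> steps;
    private final Random random = new Random();

    RandomWalker(List<ModelStep> steps) {
        this.steps = steps;
    }

    void run(int numberOfSteps) {
        for (int i = 0; i < numberOfSteps; i++) {
            List<ModelStep> enabled = new ArrayList<ModelStep>();
            for (ModelStep step : steps) {
                if (step.guard()) {
                    enabled.add(step);
                }
            }
            if (enabled.isEmpty()) {
                return; // dead end in the model
            }
            ModelStep step = enabled.get(random.nextInt(enabled.size()));
            step.action();
            step.postCondition();
        }
    }
}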

And there is another thing about model based testing. As testers we want to think on the end-to-end level. We use the requirements to create tests, we try to put ourselves in the user's/customer's mind, we want to discover the cases when something can go wrong. And it is hard to align these things with model based testing...

Why?

Sometimes it is just too much for a model, you must keep it simple!

So I can imagine two strategies to create the model:
  1. Find the area in your application which is the most critical but relatively small, and apply an extensive model to that (so you have to reduce your scope to avoid a large model).
    1. Benefit? Your model won't be complex, and your critical part will be covered. And do not forget, you can have multiple models for different aspects/parts...
  2. Step back, view the whole system, collect the interactions your users will normally perform with it, and create a model based on that. So basically you connect the happy paths together, to see whether with long, diverse usage your application is still flawless.
    1. Benefit? Your model is not complex, because your functional tests focus on the edge cases and the other parts of the requirements which would be too expensive to cover here.
So that's it for today; in the next posts I will introduce the tools, what I tried (and failed), and what I finally did (and am still doing) with them.

Wednesday 27 February 2013

Sonar and Erlang phase 2, version 0.x

I know, I know... time is running so fast, but on the planet of endless rain it is really hard to measure the passage of time...

So when I had finished the first version of the plugin and sent it to the troop responsible for Sonar, I realized they would not accept it... Meanwhile they had created a tool called SSLR (SonarSource Language Recognizer), which is a framework for creating lexers, parsers, and AST visitors for any language... So I decided to create a new version, where I parse the source code and build up an AST...

Chapter 1:
During that phase, around September-October, it took me a while to create the parser, because I am not really experienced in Erlang, but with the tool you can easily follow TDD, so basically your job is:

  1. create a grammar implementation you think is good
  2. find some Erlang code (the GitHub droid is your friend)
  3. try to parse it
  4. fix your grammar
  5. repeat steps 3-4 until you can parse it
  6. refactor your grammar to make it more beautiful, better
  7. check the file again
  8. continue with another file
So it is really not difficult. After a month I was able to create the first version; after that I had to create some rules based on the needs of the developers in the team, and also integrate it with Sonar through a new Sonar plugin, but in general (not counting a few hard days) it was straightforward.
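To show what step 1 of that loop means in practice, here is a deliberately tiny, hypothetical grammar fragment written with the SSLR lexerless builder; the class names are the ones I remember from the sonar-sslr of that time and may differ by version, and this is nowhere near the real sonar-erlang grammar:

// Assumed SSLR classes (sonar-sslr around 2013); names may differ by version.
import org.sonar.sslr.grammar.GrammarRuleKey;
import org.sonar.sslr.grammar.LexerlessGrammarBuilder;

// A toy grammar that only recognizes a module attribute like: -module(foo).
public class TinyErlangGrammar {

    public enum RuleKeys implements GrammarRuleKey {
        MODULE_ATTRIBUTE,
        ATOM
    }

    public static LexerlessGrammarBuilder create() {
        LexerlessGrammarBuilder b = LexerlessGrammarBuilder.create();

        b.rule(RuleKeys.MODULE_ATTRIBUTE).is("-module(", RuleKeys.ATOM, ").");
        b.rule(RuleKeys.ATOM).is(b.regexp("[a-z][a-zA-Z0-9_]*"));

        b.setRootRule(RuleKeys.MODULE_ATTRIBUTE);
        return b;
    }
}

Steps 2-5 are then a unit test that feeds real Erlang snippets (collected with the GitHub droid) to this grammar and fails until the rules actually match them.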
So after I had done everything, I sent them an email: Hey guys, here is a language plugin with SSLR for Erlang, what do you think?
And I was right, they accepted it (after a few fixes).... https://github.com/SonarCommunity/sonar-erlang
And it has version 0.x; you can get it from the Update Center.

Chapter 2:
I forgot to mention that the previous thing was a lexerful parser, and during my implementation the troop created a better, faster, and shinier way to parse source files, a lexerless one... so a few months ago I migrated everything I had done before to the new way, including the unit tests... and it is also released...

Chapter 3:
So what's next? Creating more checks for sure, and there are some dirty things in the grammar I want to remove/refactor, but I have not had time yet. And it is not that easy.
And there is a bug in the library sensor, so not all kinds of URLs are recognized and parsed, which pollutes the Sonar DB with useless junk in the dependencies...

And I should advertise it to other Regiments/Battalions/Platoons/whatever that use Erlang, to gather more feedback, but that also means more responsibility...

That's all folks...