My other blog (in Hungarian) merhetetlen.blogspot.com

Wednesday 30 July 2014

Agile trip - [Scrum|Kan]ban

For me the solution is Kanban. Maybe it sounds scary at first, but you can keep your useful Scrum meetings (standup, retro, planning) if you like.

With kanban:
  • you do not need to change anything on the first day, and you do not need anyone around you to change for things to work without frustration. You can slowly and effectively change what is needed later.
  • you can visualize your actual workflow from beginning to end. Usually your workflow has many more states than To Do, In Progress, Done. If you have 3 rounds of QA and 2 rounds of code review, and a slow and time-consuming release process, you can create columns for them. From that very moment everyone (including you) will see how many states a ticket has to go through before you can forget about it. Because this is what matters, not just To Do - In Progress - Done...
  • you do not need to lie that anyone can pick up a task; you can define which column belongs to which part of the team (well, it will be mirrored in the WIP limits anyway), so everyone can see how the load is balanced. If there is a bunch of columns that belong to one or two guys, you will immediately see it (and, more importantly, you can show it to others)
I like the fact that you can make your current process visible without needing to change it. You will not have problems estimating tickets from a QA or dev point of view, because it does not matter. You do not need to estimate anything, because time will show you your velocity (here it is the lead time), and that is your real velocity: the amount of time you need to push a feature live. You will not need to convert the complexity in your head and in others' heads into time, because it is based on time already.

If you are worrying about planning, that can be solved easily as well. If the number of tickets in the To Do column drops below X, you do a planning meeting and refill it. If only 5 tickets are ready because the product owner was on holiday, you put in those 5. It is not an issue at all; you just have to do the next planning earlier. If your team members cannot pick up just any task (frontend vs. backend devs, whatever), you do not need to solve it asap, or do weird personal planning meetings upfront. If someone runs out of tasks, he can pick up something or request a ticket from the PO; it is all dynamic. You can also create a column to collect tickets for specific releases, and you will always see how much testing effort that release needs. You can even define a WIP limit on those columns to avoid big releases. If you have multiple teams working on the release, you can use one board with swim-lanes and handle everything together. If you often forget the end-to-end test of a feature implemented by multiple teams over a period of months, just create a column for it...

If you have too many columns, you can create subsets of the board based on the roles in the team. QA will see different columns than a dev, a project manager, etc....

You can do whatever is needed to make your life easier. The WIP limit will make sure you do not get stuck; it will force you to move tickets forward, and the rest depends on you. You can adapt it to your workplace, and you will not depend on anything.

Kanban is more like a tool than a process.

Monday 16 June 2014

Scrum - agile trip 2

Scrum and me... me and scrum... it is a complicated relationship... My last 7 years were nothing but "move to scrum", "be agile", "save the world" (where world===company).
And scrum is like the cake. It is a lie.

I remember the first time we met. Scrum was like a blond, slender, tall Japanese woman... interesting but really confusing. A colleague brought the idea to our office (call him Golden Broom, and our office: The Colony). He had a bunch of cards in different colours, he drew a board (ToDo, In progress, Done) and mentioned the well-known meetings. One of the main developers instantly bought more colourful cards on which we could write our stories, and bugs, and whatevers. It started with one team; we successfully multiplied the number of meeting hours by approximately 1000, but at least kept the production rate the same (which is good and bad at the same time), and increased the frustration (or not... not sure about that). And Scrum, because we thought it was our light saber, I mean life saver, was introduced in every team, one by one, voluntarily (it actually was voluntary, we really thought it would change our lives). But we slowly realized Scrum did not solve any of our issues... though it did change a few things. We told ourselves "at least now we/our job are more visible", which was true, but no one wanted to see us/it.

My relationship with Scrum did not change. I have been through a few process migrations now, and I never had the feeling: "wow, this works, Scrum is great". Even if there are problems with the team members, with the company, with the alignment of the planets, the main problem is with Scrum. It cannot work. Well, that's not true, it can, but only on paper and in a really small subset of reality. Maybe it is just me, but I think a process off the shelf should be able to handle reality without mangling it.

What's wrong with Scrum?
1. Its whole concept is just not true. Usually the team does not and cannot have all the resources + access + knowledge they need for their work. So they will have dependencies, and Scrum cannot do anything about that (well, you can add an extra column). And those dependencies will strongly and randomly influence your velocity (the ultimate value).
2. The team does not work without any connection to the rest of the company (even if the Scrum master pipes all the requests). They occasionally have to help other people, solve issues, and, BLASPHEMY!!! they have to fix bugs and do releases, because:
3. Scrum just gently forgets about the rest of the lifecycle. Staging testing, releasing, firefighting, hotfixes, service releases... "minor" things that, randomly and not so randomly, influence your velocity again.
4. Scrum has no idea what to do with a non-homogeneous team, like DBA, QA, Dev (frontend, backend). They will be loaded differently during the implementation of an epic, and you cannot plan and utilize them easily (without moving them here and there, or asking them to do things outside their profession). But the biggest problem is the QA-Dev opposition. QA and dev do different things. They cannot substitute for each other in most cases. What a dev can pick up, a QA cannot, and vice versa. QA is loaded at the end, and they usually have to do testing outside of the sprint (staging, live release testing, etc)... So you will have sprints where the devs are not loaded while the QA is, or the devs could not finish their stories, so the QA could only do a little testing and were doing nothing for days (well, you know what I mean). Of course they can always help each other out (=dev should do more testing), but in reality it does not work. Especially not in the longer term.
5. list here all the issues that follow from the fact that the rest of the company needs to be able to support your Scrum ("the process is for the people", not the other way around, right?): they need to give you good product owners, the business needs to change its thinking, etc...

So in the end your velocity will be a joke. The number you can deliver for sure will be much lower than the number you can almost deliver. And this unpredictability will slowly degrade the commitment of the team.

I know, Scrum is not a silver bullet, and it is not for every team. But as I see it, it is only for a team in a startup company, a team containing almost everyone in that company, where they do not have too many users, or they are not live yet, so they do not have to deal with bugs or super-important customers, or at least they do not have a serious release process. Where serious means someone has to do something for more than a day. So indeed it is not for every team, but for <0.001% of them.

And the real problem is that Scrum does help a little. If your organization was chaos before, without traceability, visibility, and predictability (well, you won't have that here either), then introducing Scrum can give you the false hope that the improvements you achieved in the first weeks/months will stay, and all your issues/difficulties will evaporate. But in reality you will always fail the sprints, so you will plan less and less until it becomes ridiculous, then you start to increase again, because there is no other way to ease your cognitive dissonance (why can't we do more if we can do more?). And everyone silently accepts the fact that the sprints are never done. You will often have sprint goals like "complete the sprint", because the tasks are independent, not coherent enough, so the team is not actually sprinting in one direction.

In the end Scrum will be nothing more than a way of organizing meetings, doing estimates, and managing tasks.

The solution comes in the next post.

Sunday 15 June 2014

Agile trip 1

The Agile manifesto - for someone my age - is nothing but a collection of common sense. It addresses issues that were issues before I was cloned... oh well, long before I started to work in this industry. I have never met anyone who thinks processes and tools can tell us how to do our jobs. I have never met a product owner who wanted to see and read the documentation and ignored the software, or ever said: "The feature is awesome, straightforward, but unfortunately not every button is documented....". And even if the customer collaboration was far from perfect, no one ever followed the contract line by line, even if sometimes we should have done so for our own good. But I still often see that we do not want to adapt to change. As testers we love to create plans (I don't) and follow them, and ask for detailed requirements that already contain everything, so we do not need to use our brains anymore during our job, only if we want to for some reason. And as developers we get so into the implementation details, or fall under the spell of a cool/new/rare/fancy tool/library, that we want to use it no matter what, even when it just does not make sense anymore. Or we do technology migrations, refactoring/optimization/etc. for our own fun, without adding any value to anyone, and keep doing it even if the whole world changes meanwhile. Change is somewhat against our default mindset. We love change, but only planned change. We love change, but only the changes we start. We hate continuously changing focus, re-planning our tasks, dropping things we love in favor of things we do not know. It seems we forget we are not raising children here... we create software that may live only for weeks, even if that piece of code is the best thing we have created in our lives. We are creating short-lived phantoms, not statues for eternity. It is like a shoemaker who either works on one shoe his whole life, or does not want to sell any of his creations...

Saturday 14 June 2014

Open space projects - do it yourself

Open source projects.

I spent a few days configuring the Jenkins JClouds plugin and OpenStack. My original goal was/is to make it possible to create our complex (>3-6 machine) system for the smoke test that I plan to run after each commit. Currently I am extremely far from that; I do not even think it is possible in a clean and maintainable way. But what I wanted to achieve in the first phase of this project was to have the magical on-demand Jenkins slave feature in our currently sandbox-only Jenkins. I really, really thought it would be easy.....

Plug and play did not happen.... it may be because of our OpenStack configuration, or because JClouds and its Jenkins plugin seem to be mostly used and tested/created/whatever for Amazon's cloud, but it does not work without some changes. I cannot really recall all the issues, but here are some:
1. userdata is optional.... WRONG, it will die with a NullPointerException
2. creating the jenkins user did not work, we had to create it in the image; it just died silently somewhere after the init....
3. it needs the instance name, but it won't bother to use the hostname or name or id fields in the metadata; it only expects it in the tags/name.... and of course that is not there by default on OpenStack
4. if you add it to all responses (well, it's Python, right? eeeeeeeasy), the RuninstanceResponseHandler will die while the DescribeInstancesResponseHandler (I think that was the class name) works.... even though they are basically handling the same kind of response.
5. you must like floating IPs, because it will use them even if you do not want it to
6. and debugging a remote jenkins master is just pain.... random ports... guesses... weirdness

But is there really an issue here? I mean, maybe we are just too lazy and forget the fact that we can be proactive. Even if I personally do not have the knowledge to understand and fix a tool (well, actually that is not true, I am fking awesome), our company, our comrades, our friends, our family, our kids.... well, mostly the company where we work has to acknowledge and allow and support tweaking/fixing/breaking the tools we use. Support it with time or with people.

There is no such thing as a free tool, as we all know, but really often we expect reality to give us everything cheap + instant, because these tools are so internal and hidden that there is no way to explain to the chief business guru master marketing demigod, or to the customer, that we need some time (where some is greater than the number of hours you can work overtime without notice) to fix/do something with them... Do we lie too much? Do we not complain enough? Were we just extremely lucky to be able to make all those releases? It does not really matter which one; the fact is there, we have to spend visible time on them. We cannot just recoil when we find an issue. In the end we have to forge the Death Star, and if the hammer is broken we have to find a way to fix it. We have to be honest, and even if no one cares, we have to keep on at Darth Vader if necessary, until the problem is solved or we are dead.

So in the end the first phase was done and the Jenkins slave was started. We had to spend some time on debugging, some on finding solutions, some more on applying solutions, some more on more debugging, some more on drinking more coffee, but in the end it will work (or not), and we learnt something (or not). And we can tell our boss, and the customer, and the sergeant: software development is not magic... we are just plain old craftsmen

Sunday 18 May 2014

Automation

I really love the autobots, the automaton, the automobile, the automauto, basically everything which starts with auto and ends in suffering and pain....

Automation.... our silver bullet, life saver, job saver, mind saver.... I have built and worked with a few (internal) automation frameworks, and one thing always happens if you are not paying attention: it becomes a heavy, massive piece of sh*t... why? You are a test engineer; you are not writing unit tests in these frameworks, so it is pretty sure you will have to implement some business logic (in a simplified form), and you will execute complex, ever more complex scenarios, because your mind is critical and you will do negative, unhappy, sad, destructive tests. You are working in an "agile" environment, trying to follow the code generators (alias developers), supporting the release, acting as internal customer support (and all these things are fun). You want to deliver the "tests" asap, and they want you to deliver the "tests" asap. You try to avoid headaches, so you won't revisit any story unless you have to, so you try to be fast, fast, fast....

And meanwhile everyone forgets that your/our automation framework is an application as well. It needs maintenance, it must be structured, with no code duplication, easy to learn, easy to change. But most likely only some of these will be true for you, who created it, and not for anyone else...

You end up with a regression test suite running for 4 weeks, because it was not designed to run in parallel or to have any kind of run optimization. But to add that, you have to change everything. The code is completely unbalanced: some parts are good, others are so crappy you just do not want to open them. And the tests are the same.... some use an old part of the framework, in others you tried out something else, and the latest ones work in a different way. It has so many tricks (workarounds) that even you start to lose track, but you cannot do anything, because the code generators are still generating code, and you want to keep up with them...

Solution:
Build your framework and your tests, from the very first day, as if they were mission-critical applications. Because they are. You cannot be negligent; the execution and the report coming out of your tests will be the most important thing for your team (if not, do not do it at all).

Tuesday 29 April 2014

Sonar cSs plUgin

I know, I know... no post for a year and now 2 in one day... the mini-mes (aka young clones) are not here, so I have plenty of time...

It is just pure self-praise, but I wanted to tell you that the sonar-css-plugin is now available (not officially released yet) in the Sonar forge. So if you have CSS files under your pillow, you can now parse them with its awesome AST parser and check them against the brilliant, super-intelligent, cosmological, incomparable CSSLint rule set (https://github.com/CSSLint/csslint).

You can find it here: https://github.com/SonarCommunity/sonar-css

I am open to suggestions, ideas, requirements, pull requests, donations, followers, worshippers...


Un-unit-test-ability

I truly and deeply regret my long silence. For various reasons I had to move to another battleship. Slightly smaller, but we can react faster to any action of the Rebellion. But that is not so important. What is important is that I have a new codebase to work with....

It is in Java, and it is completely up to date with the trends, which is something really new to me. The last time I was involved in a Java upgrade for a project, it was from 1.4 to 1.6, when 1.7 was about to come out. This one uses a lot of Java 8 features (lambdas, Consumers, soon Nashorn, etc.); it is a fully concurrent, event-based, scalable thing (not sure what else, but I can recall those words)....

so (or not so, just and)...


Which is nice. On one hand it looks funny, especially after Erlang and some JavaScript. I also wrote my first lambda expression, and I felt a strange prickling over my scalp and over my arms; it was not a bad feeling... But I am a tester (at least my superior shouts it all the time), and the project lacks unit tests, so I decided to do some unit testing and show them (=devs) how easy it is:

1. there was an itsy-bitsy feature which was about getting some number from another system, storing it during the session, and dumping it somewhere when the session ends. I thought: if anything is unit testable, this is it. We can check what happens if the other system sends a negative value, or a 0 (which should not happen), or a number less than the previous one (which should not happen either)... it's easy. I prepared myself to use mockito (not mochito, though in the end I should have used that) to mock out the other system and everything I did not really need, and then I could show them how to do it.
well...
the whole system is concurrent (whatever that means) and asynchronous, so the class which had the (private) method I wanted to test had lambdas all over the place (and inner classes). It was passing around complex classes, the Runnables were triggered from various places, and I just got lost in it... just to test that method I either had to mock half of Corellia, or had to create instances and start up almost everything. So I just said: there is no way I am going to unit test this....
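For contrast, here is a minimal sketch (all names are hypothetical, not the real codebase) of how that itsy-bitsy feature could look if the external system were hidden behind a small interface. With that seam in place, a plain hand-written stub replaces Mockito, and the negative/zero/decreasing cases become trivial to unit test:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the remote system the real class talks to.
interface NumberSource {
    int nextNumber();
}

// The feature under test: pulls numbers during a session, validates them,
// and hands the collected values over when the session ends.
class SessionCounter {
    private final NumberSource source;
    private final List<Integer> values = new ArrayList<>();

    SessionCounter(NumberSource source) {
        this.source = source;
    }

    /** Polls the source once; rejects values that "should not happen". */
    void poll() {
        int n = source.nextNumber();
        if (n <= 0) {
            throw new IllegalStateException("non-positive value: " + n);
        }
        if (!values.isEmpty() && n < values.get(values.size() - 1)) {
            throw new IllegalStateException("value decreased: " + n);
        }
        values.add(n);
    }

    /** "Dump it to somewhere" at session end; here we just return the data. */
    List<Integer> endSession() {
        return new ArrayList<>(values);
    }
}

public class SessionCounterDemo {
    public static void main(String[] args) {
        // A lambda-backed stub is enough; no mocking framework needed.
        int[] feed = {3, 7, 9};
        int[] i = {0};
        SessionCounter counter = new SessionCounter(() -> feed[i[0]++]);
        counter.poll();
        counter.poll();
        counter.poll();
        System.out.println(counter.endSession()); // [3, 7, 9]

        // The "should not happen" cases are now one-liners to provoke:
        SessionCounter bad = new SessionCounter(() -> -1);
        try {
            bad.poll();
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The point is not the counter itself but the constructor-injected interface: if the real class had taken its collaborator that way instead of wiring Runnables everywhere, no mocking of half of Corellia would be needed.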

2. but I am stubborn, so I decided to give unit testing in this app another try. There was a "unit" test testing a small class that parses some buffer/stream/whatever coming from the network. I was like: oh yeah, here I can truly mock the shit away, simulate the stream this class gets from its field, and remove the test's dependency on an external running system (which is an issue with this test). I tried not to mock the whole world around it, just what I needed, but I failed again... mocking the class that stores the buffer was not enough, because I had to call a method in my class to read the buffer from the field, but that method also used some other fields of the mocked class, which were not initialized because it was just a mock... but if I had not mocked it, I could not have changed the data it holds....

so in the end I realized that with my knowledge (I am somewhat sure this is the main issue), in a reasonable amount of time, it is impossible to unit test anything in this application (well, that's not true, but at least anything new that is not unit tested already). With all these Runnables lying everywhere, triggered from various places when a given event happens somewhere, I felt like I was trying to directly test an anonymous function deeply nested inside other anonymous functions in JavaScript code.... just no way....

so is it me, or is it the world?

Saturday 18 May 2013

Abstracting abstract abstractions

Even on Dagobah there is no such thing as enough rain.

Abstractions are good. We can simplify things based on the way we want to use them. The last time I was at a QA conference on our one and only Death Star in the Nether sector, there was a whole presentation about how to create a good framework by building layers of abstractions.

Let's take an example:
    You have to test a backend user management system which has a REST API (or SOAP, or JSON over HTTP, or basically whatever). When you test it, you will for sure create an abstraction which somehow maps your tests to actual HTTP calls, and hopefully you won't make raw HTTP calls from your test methods every time... This is layer 1.

    After a while, when you get bored of setting up the same object over and over again when you want to log in with a test user, or register, or do something, you will decide to create another abstraction with util methods for those commonly used actions, or groups of actions, you have to perform over and over again. This is layer 2.
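A sketch of what layers 1 and 2 might look like (the class names and endpoints are made up for illustration): layer 1 turns test intent into HTTP requests, layer 2 bundles the repetitive flows. The transport is pluggable, so the sketch runs without a real server:

```java
// Layer 1: hypothetical thin client that maps test intent to HTTP calls.
// The transport is an interface so tests (and this sketch) need no server.
interface Transport {
    String send(String method, String path, String body);
}

class UserApiClient {
    private final Transport transport;

    UserApiClient(Transport transport) { this.transport = transport; }

    String register(String email, String password) {
        return transport.send("POST", "/users",
                "{\"email\":\"" + email + "\",\"password\":\"" + password + "\"}");
    }

    String login(String email, String password) {
        return transport.send("POST", "/sessions",
                "{\"email\":\"" + email + "\",\"password\":\"" + password + "\"}");
    }
}

// Layer 2: util methods for the flows you repeat in every other test.
class UserApiUtil {
    private final UserApiClient client;

    UserApiUtil(UserApiClient client) { this.client = client; }

    /** The "set up the same object over and over" flow, captured once. */
    String registerAndLogin(String email, String password) {
        client.register(email, password);
        return client.login(email, password);
    }
}

public class LayersDemo {
    public static void main(String[] args) {
        // Fake transport that just echoes what layer 1 asked for.
        Transport fake = (method, path, body) -> method + " " + path;
        UserApiUtil util = new UserApiUtil(new UserApiClient(fake));
        System.out.println(util.registerAndLogin("t@example.com", "secret"));
        // → POST /sessions
    }
}
```

In a real framework the `Transport` would wrap an actual HTTP client; the layering is the point, not the fake.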

   After a few months, or after adding many more tests, or after covering many many new features, you realize that even with these util methods it is sometimes painful (in terms of lines of code, or in terms of readability / maintainability) to set up complex preconditions. So you decide you need another layer. But it is not obvious what.

    Maybe you already have a class which represents your test users. It can hold basic or not-so-basic info like: first name, last name, dob, email, password, number of dogs, IQ, etc... And maybe you will start to wonder: if I have a list (or something like a list) of attributes a user can have, why don't I have a list of actions a user can perform? You can have things like: login, register, send a message to someone, get banned, fly to the moon, get fired, get attacked by Jedi, etc...
    One of the best ways to list things in Java is to create an enum. Another benefit is that you can define an abstract method which every enum constant must implement, something like: doAction(TestUser). So in your enum you can list your actions and implement the required API calls to achieve each action (using things from the previous layer). And in your test user object you can add a method like doActions(MyOwnActions... actions), where you list the things you want to perform with your test user. This is layer 3, and it is really handy:
user.doActions(REGISTER, LOGIN, GO_TO_MOON, KILL_A_JEDI, EAT_BAKLAVA);
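The enum-with-an-abstract-method pattern described above might look like this self-contained sketch (the action bodies are placeholders where the real layer-2 calls would go):

```java
import java.util.ArrayList;
import java.util.List;

// Layer 3 sketch: every enum constant implements the abstract action.
enum UserAction {
    REGISTER {
        @Override void doAction(TestUser user) {
            // here you would call the layer-2 util, e.g. api.register(user)
            user.log("registered");
        }
    },
    LOGIN {
        @Override void doAction(TestUser user) {
            user.log("logged in");
        }
    },
    GO_TO_MOON {
        @Override void doAction(TestUser user) {
            user.log("on the moon");
        }
    };

    abstract void doAction(TestUser user);
}

class TestUser {
    private final List<String> history = new ArrayList<>();

    void log(String event) { history.add(event); }

    /** Run a sequence of actions; the user ends up in the resulting state. */
    TestUser doActions(UserAction... actions) {
        for (UserAction action : actions) {
            action.doAction(this);
        }
        return this;
    }

    List<String> history() { return history; }
}

public class ActionDemo {
    public static void main(String[] args) {
        TestUser user = new TestUser();
        user.doActions(UserAction.REGISTER, UserAction.LOGIN, UserAction.GO_TO_MOON);
        System.out.println(user.history());
        // → [registered, logged in, on the moon]
    }
}
```

The nice property of the enum is that adding a new action forces you to implement doAction for it, so the list of actions and their implementations can never drift apart.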

    So we have 3 layers for simple API testing, which abstract the HTTP calls to a level where you can just list what you want to do, and your test user will end up in that state. But what happens if someone testing the GUI, for example, needs your test user thingy? You can tell him to do what you did: before the tests, create a test user instance and set it up with the required actions and so on.
This means they have to change, or duplicate, the same kind of things in multiple places (it depends, of course: maybe you have to keep your test class hierarchy flat, and you do not need users in every test). So maybe in that framework it would be ugly to set up the test users everywhere, and you want to provide a nice way to just get what you want.
    You already have a static abstraction of the actions your test users can perform, so you can easily create dependency injection.
    Well, I know this example is not that common, but in my case we cannot just inject the dependency with Guice or something... here, in the tests, the test user depends on runtime parameters, and you need a given, and often varying, state of the user.
    What can you do? You can create an annotation like @TUser(actions={Action1, Action3}), and if you use TestNG you can override IMethodInterceptor so you can create a test user in the annotated field based on the parameters (from the test instance, or runtime things) you decide to use. So you have abstracted the test user initialization, and this is layer 4.
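A minimal sketch of the layer-4 idea, stripped of TestNG for the sake of a runnable example: a hypothetical @TUser annotation plus a few lines of reflection that inject a prepared test user into the annotated field before the test runs. In real life the `inject` call would live inside a TestNG listener / IMethodInterceptor:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Hypothetical annotation describing the state the test user should be in.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface TUser {
    String[] actions();
}

// Minimal stand-in for the layer-3 test user.
class TestUser {
    final List<String> performed = new ArrayList<>();

    void perform(String action) { performed.add(action); }
}

// The injector a TestNG listener would run before each test: find @TUser
// fields and fill them with a user put through the listed actions.
class TUserInjector {
    static void inject(Object testInstance) throws IllegalAccessException {
        for (Field field : testInstance.getClass().getDeclaredFields()) {
            TUser spec = field.getAnnotation(TUser.class);
            if (spec == null) continue;
            TestUser user = new TestUser();
            for (String action : spec.actions()) {
                user.perform(action); // real code would call layer 3 here
            }
            field.setAccessible(true);
            field.set(testInstance, user);
        }
    }
}

public class InjectionDemo {
    @TUser(actions = {"REGISTER", "LOGIN"})
    TestUser user;

    public static void main(String[] args) throws Exception {
        InjectionDemo test = new InjectionDemo();
        TUserInjector.inject(test);
        System.out.println(test.user.performed);
        // → [REGISTER, LOGIN]
    }
}
```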

...well, when I implemented this I thought there were at least 8 levels of abstraction... it's only 4, but the message is clear: if things are getting complicated and you have a lot of duplication, do not be afraid to take a step back, look at it from a new point of view, and move the things you need into another layer.

...sorry for the half broken language, my brain is half off....
 

Gungans are everywhere

Maybe it is just me, but I often get annoyed by the dozens of gungans who have appeared around me.... they are everywhere...

I am on the users' and devs' mailing lists of a testing tool, and each and every day I see a post from a gungan... it is a safe bet that the post was sent by a gungan when:

  • he sends his question to both lists; he did not bother to read the purpose of the mailing lists, and of course when someone tells him not to do so, he apologizes and promises not to do it again (it does not really matter, because someone else will do the same the next day)
  • his question is
    • pointless, meaningless; I mean, I am not an expert in the high galactic language, but at least I try to describe my problem as simply and in as much detail as possible (true, not always perfectly), while here you have to call a protocol droid just to understand the message.
    • extremely generic, I mean so generic that at first you think it is a joke
    • completely unrelated to the tool, so you guess he was just using that tool in a different window when the problem occurred in a completely different application in another window/screen/battleship
    • unrelated to the tool, i.e. the problem is with something this tool is using, and that is clear from the error message, or whatever
  • his email
    • has no details... so basically you have no idea what he is talking about, only that he experienced something with that tool
    • has a lot of details... so you get the description of the problem, the log files, the contents of his HDD, his mail history, all together bigger than 1 petabyte.
    • has enough details, but copy-pasted into the mail with broken formatting, so no one can actually understand it (and sometimes the last two are combined)
  • his problem is
    • easily solvable: if he read the error message, he could see (hopefully) what the problem is and how to solve it, because it explicitly tells him where the problem is and what to do; it even gives a nice recipe for his date tomorrow....
    • already answered 10 billion times
    • can be solved after 20 seconds of galactic network searching...
  • his emails
    • are coming sequentially
      • every half a day about the same problem, in the same thread
      • every day about the same problem, in a different thread
      • every minute about different "problems" in different threads (OK, I know, at least that is proper problem/thread usage)
I have asked a lot of stupid questions as well (and I will again), but I have always tried to find my answers beforehand; I even tried debugging the application, sacrificing a wookie, just to avoid asking, before I did so....

... and only a gungan can publicly call himself something like: Emperor-certified jedi slayer professional, after 2 missions on a deserted planet hunting scorpions...

Thursday 25 April 2013

Flashbacks

    It is always good if you are stationed in a peaceful quadrant of the galaxy. No annoying battles with the Rebels, no need to wear your white helmet in which you cannot breathe... but sometimes, even here, you have to jump back to the front line and fight.

    So what happened? I had to test a web GUI again, after 1 year of peaceful API testing. It was not a usual test, it was much simpler: I just had to load a few pages a few times... you could not wish for an easier task...

Good old problems I faced:
  • a page just does not work in HtmlUnit: there is a meaningless JS exception in the log, and it is impossible to figure out what went wrong, where, and why. And of course everything works perfectly in a real browser (=slow, slow, slow)
  • So you have to use a real browser, but:
    • you cannot set any cookie you want. Why? Because no! Details? "An average user could not do it either"... thanks! This is not an average-user emulator but a testing framework, please! That is also the reason why you cannot get response status codes, delete the cache (not sure...), and other "useless, unwanted" features
    • you cannot use the domain and the expiration date even if you are on that page; it only works with name=value... no comment
    • Chrome hangs loading page A, Firefox will hang for sure on page B... so you cannot use either of them without hacking...
    • You set the page load timeout and realize it is not supported by Chrome. No, it is not important at all... the default 2 eons is absolutely fine. But do not worry, you can use a plugin in Chrome which does the same, but:
      • you cannot get the crx file.... 
        • well, you can get it, but it will take you a while; it is not easy to find at all, and you have to be fast and spend a lot of time figuring out how to get it
      • finally you get it and realize --load-extension does not like it, so in the end the folder that was always there is enough... no worries, I am paid by the hour...
So it is fun.... pure fun. But anyway, what I can suggest is: never test in HtmlUnit a well-written site which uses all the features, like listeners and the Emperor knows what else (it was long ago, in a faraway....), features you would never think should be the standard for a web test framework....

...but for the Force's sake, if your test framework does not come with an AI, where it is enough to show it the site and tell it a few things to listen for, then do not block our access to the internals. This is our profession; we know what we want to do and why... at least me and my imaginary friends

Tuesday 19 March 2013

Model based testing - Osmo

Let's start with tool one: Osmo

Osmo is a Java-based tool. I started to use it to gain more confidence in my tests by connecting them with each other.
You can download it from here: https://code.google.com/p/osmo/

It is a really simple tool. It gives you a few annotations (@Transition, @Guard, @Post, @Pre, etc.) which you can use to annotate the corresponding methods, and it gives you basic ways to generate test scenarios, like completely random or balanced random (so each transition gets executed a similar number of times).

You can define end conditions for when to finish a test or a suite (time based, coverage based), and you can easily create your own. It also has other features, for example binding transitions to application features. I won't go into details unless you are interested. I will tell you what I tried to achieve with it, and, at the end, what I decided to do with it.
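To give a feel for the shape of such a model, here is a tiny state machine in the same spirit: guards say which transitions are currently allowed, and a seeded random walk picks among them. This is self-contained stand-in code to illustrate the idea, not Osmo's real API (check the Osmo project for that):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// A tiny model in the spirit of the Osmo annotations described above:
// each transition has a guard (is it allowed now?) and an action.
class LoginModel {
    boolean loggedIn = false;
    final List<String> trace = new ArrayList<>();

    boolean guardLogin()  { return !loggedIn; }
    void login()          { loggedIn = true;  trace.add("login"); }

    boolean guardLogout() { return loggedIn; }
    void logout()         { loggedIn = false; trace.add("logout"); }

    boolean guardCreateEvent() { return loggedIn; }
    void createEvent()         { trace.add("createEvent"); }
}

public class RandomWalkDemo {
    public static void main(String[] args) {
        LoginModel model = new LoginModel();
        Random random = new Random(42); // seeded so the run is reproducible

        // End condition: a fixed number of steps (Osmo also supports
        // time-based and coverage-based end conditions).
        for (int step = 0; step < 10; step++) {
            // Collect the transitions whose guards currently pass...
            List<Runnable> enabled = new ArrayList<>();
            if (model.guardLogin())       enabled.add(model::login);
            if (model.guardLogout())      enabled.add(model::logout);
            if (model.guardCreateEvent()) enabled.add(model::createEvent);
            // ...and take one at random.
            enabled.get(random.nextInt(enabled.size())).run();
        }
        System.out.println(model.trace);
    }
}
```

The generator can never produce an illegal sequence (logout while logged out, etc.), because the guards filter the choices at every step; that is exactly what makes randomly wandering through a real API or GUI safe.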

First I tried to use it in my API tests (a JSON-over-HTTP API). Because I was aware of the problem of a too-complex model, I decided to cover only the happy paths. So I created a model for each interface; I could easily implement the transitions (login, logout, register, create event, etc.) and I could use my previously created test user object to track the state. So it was pretty easy, but in the end meaningless...

why? 

Mostly because the API was so simple and so well covered with functional tests that it was really unlikely that a slow, limited model would discover anything... Maybe I was wrong, but I already knew of another tool I could also use... so I decided to try that out as well... but that is a different story.

But I really liked OSMO's ease of use, and I saw its power. So I started to look for a better place for it.

We have another test framework for the web GUI. It is based on Selenium and uses JBehave (maybe I will write another post about how you can integrate another test tool into JBehave). So it is a high-level thing, with page object models.

And it was promising... I just closed my eyes and imagined a test that wanders around the whole site randomly, performs several GUI actions, and continuously checks the states.... and it was a good image.
Maybe I am alone, but I have often faced issues that occur close to, but above, the level of the functional tests, like:
  • after the 2nd page of the registration, if you click back and next, and back and next, you lose your session
  • if you open page A, go to page B, log in there, click on the promotion, click back, log out, and open page A again, your session is still there, or you get a JavaScript exception, or whatever...
  • you can imagine more. The common thing: each function works, but the (well, to be honest, CRAZY) combination of them does not.
So here I do not need to be fast; what I want is to just do whatever I can on the page and see what happens... So I created a prototype for this, which we can extend later, basically adding another layer of testing. Previously we had tests for pages, and now we can have tests for whole sites.
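A stripped-down version of such a prototype fits in a few lines. Everything here (the SiteState fields, the action names) is made up for illustration; in the real framework each action would drive Selenium through a page object, and the check after every step would assert on the live page (session cookie present, no JavaScript errors, and so on):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.function.Consumer;

// Self-contained sketch of the random-walk prototype; the state and the action
// names are invented for the example, not taken from any real framework.
public class RandomSiteWalker {

    static class SiteState {
        String page = "A";
        boolean loggedIn = false;
        final Deque<String> history = new ArrayDeque<>();
    }

    // Walks randomly for the given number of steps; returns the steps executed.
    public static int walk(long seed, int steps) {
        SiteState s = new SiteState();
        Map<String, Consumer<SiteState>> actions = new LinkedHashMap<>();
        actions.put("openA",  st -> { st.history.push(st.page); st.page = "A"; });
        actions.put("openB",  st -> { st.history.push(st.page); st.page = "B"; });
        actions.put("login",  st -> st.loggedIn = true);
        actions.put("logout", st -> st.loggedIn = false);
        actions.put("back",   st -> { if (!st.history.isEmpty()) st.page = st.history.pop(); });

        List<String> names = new ArrayList<>(actions.keySet());
        Random random = new Random(seed);
        for (int i = 0; i < steps; i++) {
            String name = names.get(random.nextInt(names.size()));
            actions.get(name).accept(s);
            // the continuous check: after *every* step the state must be sane
            if (!s.page.equals("A") && !s.page.equals("B"))
                throw new AssertionError("invalid page after " + name + " at step " + i);
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println("survived " + walk(7, 1000) + " random steps");
    }
}
```

The point is exactly the crazy combinations from the list above: a thousand random back/login/logout clicks will hit sequences nobody would ever write down as a functional test case.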

And what else can you do with such a model? You can use it to monitor your application. If you balance the transitions and make an endless run, it will continuously monitor everything (that is in the model) for you. And with the balancing you can put more focus on the critical components, and less on the things you do not really care about.

And if your things are in Java, it is really easy to do.

Next time I will tell you about the other model based testing tool....

Friday 1 March 2013

Model based testing - the future?

So there is this thing: model based testing... sometimes they call it property based testing. The concept is simple: instead of writing test cases (which can be really, really inefficient) you just tell a droid what to do. Of course it is not that simple... the droids nowadays require a lot of explanation...

In a nutshell it means the following: you create a finite state machine that describes the expected behavior of your application, and based on that you can generate tests.
The edges describe the actions you perform on the system under test, and the nodes are the expected states, i.e. the content of your assertions. Sounds simple...

My previous experiences with model based testing were...
  • If there is any tool support for visualizing, executing, storing, creating, whatever, it is really expensive, and it dictates the way you can interact with the system....  you are strongly bound to the tool.
  • After a while, when you try to cover all your requirements with the model, you end up creating a huge, unmaintainable monster that is more difficult to understand than the implemented system, and the hair stands up on the back of your neck if you even think about having to change it
  • so you decide to throw it away and implement tests based on your requirements and your inspiration, with a few comments, so everyone can understand what the feature/method is about...
But what do we have now?

In the last few months I started to use two tools... well, I am not extremely experienced in either of them, but at least I am familiar with the features they can give a QA engineer. The two tools are similar in a few points:
  • you use a programming language to implement your model based test
    • so you have to do your own FSM implementation
  • the tool provides you the bricks that you can use to build up your model:
    • a transition/action, where you call a method to change the state of the system under test (aka the test call)
    • a post condition, where you can check whether the state of your application is what you expect (after the log in I have a proper session)
    • a precondition/guard, where you can filter out those transitions that you do not want to call in the given state (no logout when you are not logged in)
    • next state: where you can change your model's state (it only exists in one of the tools; in the other you have to put it in the transition or in the post condition)
  • and the tool gives you other features to make the execution better (on this level, they are really different)
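To show how little machinery these bricks need, here is a generic sketch in plain Java. It is not the API of either tool, only the common shape: a guard filters the enabled steps, the action changes the state (playing the "next state" role here too), and the post condition asserts on the result:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Generic sketch of the four bricks; neither tool's real API looks like this,
// it only shows how they wire together.
public class Bricks {

    // One step bundles a guard (precondition), the action itself, and a
    // post condition checked against the new state.
    public static class Step<S> {
        final String name;
        final Predicate<S> guard;
        final Consumer<S> action;
        final Predicate<S> post;
        public Step(String name, Predicate<S> guard, Consumer<S> action, Predicate<S> post) {
            this.name = name; this.guard = guard; this.action = action; this.post = post;
        }
    }

    // Completely random generation with a length-based end condition.
    public static <S> List<String> run(S state, List<Step<S>> steps, long seed, int length) {
        Random random = new Random(seed);
        List<String> trace = new ArrayList<>();
        for (int i = 0; i < length; i++) {
            List<Step<S>> enabled = new ArrayList<>();
            for (Step<S> s : steps) if (s.guard.test(state)) enabled.add(s);
            if (enabled.isEmpty()) break;  // dead end: stop early
            Step<S> pick = enabled.get(random.nextInt(enabled.size()));
            pick.action.accept(state);
            if (!pick.post.test(state))
                throw new AssertionError(pick.name + " broke its post condition");
            trace.add(pick.name);
        }
        return trace;
    }

    public static void main(String[] args) {
        // login/logout toy model: the guards force a strict alternation
        boolean[] session = { false };
        List<Step<boolean[]>> model = List.of(
            new Step<boolean[]>("login",  s -> !s[0], s -> s[0] = true,  s -> s[0]),
            new Step<boolean[]>("logout", s ->  s[0], s -> s[0] = false, s -> !s[0]));
        System.out.println(run(session, model, 1, 6));
        // prints [login, logout, login, logout, login, logout]
    }
}
```

The toy model in main shows the guard at work: logout is filtered out while you are logged out, so the generated trace strictly alternates.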
So the bricks are simple. A lot depends on you and how you implement the thing, and here the KISS principle is critical.

And there is another thing about model based testing. As testers we want to think at the end-to-end level. We use the requirements to create tests, we try to put ourselves in the user's/customer's mind, and we want to discover the cases where something can go wrong. And it is hard to align these things with model based testing...

Why?

Sometimes it is just too much for a model, you must keep it simple!

So I can imagine two strategies to create the model:
  1. Find the area in your application which is the most critical, but relatively small, and apply an extensive model to that (so you have to reduce your scope, to not end up with a large model).
    1. Benefit? Your model won't be complex, and your critical part will be covered. And do not forget, you can have multiple models for different aspects/parts...
  2. Step back, view the whole system, collect those interactions that your users will normally do with it, and create a model based on that. So basically you connect the happy paths together, to see whether, with long, diverse usage, your application is still flawless.
    1. Benefit? Your model is not complex, because your functional tests will focus on the edge cases and on those other parts of the requirements which are too expensive to cover here.
So that's it for today. In the next posts I will introduce the tools to you: what I tried (and failed), and what I finally did (and am doing) with them.