Wednesday, November 13, 2013

EuroSTAR 2013 - A developer's first test conference

This time of year normally means going to Oredev for me. Being a developer, a conference for developers would seem the natural choice. But this year I took a leap of faith and decided to attend EuroSTAR instead, a software testing conference in Gothenburg. My interest in quality and testing made me curious whether I could find alternative perspectives on the subject by seeing it from a tester's point of view.

My first impression, after a glance at the content, was that there is one big difference for me. A superficial one, but a difference nonetheless. At a developer conference, there are always a few "rock stars" from the community. In the testing world, I am not as familiar with the "celebrities", so the chance of becoming star-struck seemed slim. That in itself says nothing about the quality of the content, but it always adds to the experience to see idols such as Scott Hanselman, Dan North or Gojko Adzic in real life. All in all, the testing community is as new to me as testing conferences.

I went into the conference with mixed emotions. One part of me was excited to be in an environment where everyone is as passionate about software quality as I am; the other part was nervous that not choosing a developer conference was a bad idea.

But enough background, on to my experiences in the actual sessions.

Keynote 1: “Skeptical self-defense for the serious tester” – Laurent Bossavit

The keynote began with some administrative information about rules and practices for the conference. EuroSTAR uses an interactive format for its sessions, with Q&A as an essential part of the talks. My previous conferences have also had Q&A in the sessions, but often only if time permitted, and since many speakers run over their assigned time, discussions at the end are usually limited.

Another new thing is how the Q&A is facilitated. All attendees are given three colored sheets of paper and a unique number. If I want to ask a question on a new subject, I raise the green card. If I want to continue on the current topic, I use the yellow card. If I have something super-urgent to say, I use my red card. At first, this seemed overly ambitious. We are, after all, in Sweden, the country of people who avoid speaking in public like the plague. But I was surprised to see how many people took to the technique so quickly, and it was facilitated in a very professional way. However, the attendees are very international and most questions came from the non-Swedish part of the crowd, so my plague theory might still be valid... The key thing, though, was that it worked a lot better than I expected.

The keynote itself was about questioning facts. A lot of statements have been accepted over the years as absolute truth, such as the claimed cost increase of finding bugs late in the development process. Laurent encouraged us to question such statements and seek the actual facts behind the claims. To be skeptics. I think he got his message across, and it is a valid message, but it was a little long-winded and the talk was a bit slow-paced for my taste. All respect to Laurent, who really has put a lot of effort into his research and obviously knows a lot about it. One funny coincidence: when I came home and watched some children's TV with my kids, there was a show called "Is it true?". It made roughly the same points about being skeptical and carried much the same message. In 15 minutes, at a kids' level. Maybe it was the fact that he had the "graveyard shift" after lunch that affected my experience. Damn you, chicken tikka masala :)

“Experiences of test automation at Spotify” – Kristian Karl

I had seen this talk before, at MeetUI in the spring, but this version was longer and delved into the team structure at Spotify, among other things. The setup at Spotify is really impressive, both from an organizational and a test automation standpoint. Karl described the really cool automated tests that ensure Spotify's quality on multiple platforms. The tooling used was all open source, to be economically scalable. He also described the notion of "guilds" at Spotify: cross-team interest groups that exchange experiences in a common field of interest, such as automation or deployment. One thing that is perfectly clear is that Spotify is a product everyone knows and is interested in. The session was packed beyond its limit and a swarm of people hovered around Karl afterwards. A great talk all in all!

"Questioning acceptance tests" - Adrian Rapan 

Given the topic of this talk, I was expecting something about acceptance tests, which to me means tests that face the business. But this session took a completely different path. Adrian described how they had come up with automated tests for the calculations of a trading application. It was a pretty cool thing they had invented, even if I am still not sure where to place these tests. They were, as Adrian admitted, not readable by business people. He described how they had used Spock, QuickCheck and property-based testing to generate hundreds of tests for financial trading business rules. Interesting stuff, but not as much about acceptance tests as I expected. As a developer, I like code and cool technical stuff, but I am not sure how many testers followed what Adrian was talking about. I am not even sure I did... :)


Keynote 2: "Testing Machines As Social Prostheses" - Robert Evans

How are computers and humans different? Humans can understand a social context; a computer cannot. Even if we throw more and more rules at a computer, it will not be able to mimic human behavior completely. This is what Robert contended, and he made a very convincing argument. He used the example of a spell checker, which can only use a dictionary to make sense of language, whereas a human can understand that a spelling error might be okay in a certain context. Robert also talked about how we as humans accept flaws in computer behavior compared to the corresponding human version. We "fill in the gaps" so that we can interact effectively. An automatic check-out station at the grocery store is not a human cashier, but we're willing to accept that and adapt to the new behavior. An intriguing talk, delivered at a fast and steady pace by someone who knew what he was talking about. Cool stuff, although I'm not at all sure how I'll put it to use. One takeaway is that I am no longer expecting any human-like robots any time soon...

Keynote 3: "Creating Dissonance: Overcoming Organizational Bias Toward Software Testing" - Keith Clain

This was the best talk of the conference. Keith is an awesome presenter and he really delivered. He spoke about the bias that testers and testing encounter in the business: tests cost too much, take too much time, are useless, and so on. He described approaches for fighting those biases and how to start making a difference. What I took away from his advice:
  • We should become field experts, read up!
  • We should strive to win over everyone, not only company leaders. If we only convince the CEO and he/she is replaced, no change will stand. 
  • Don't settle for mediocrity
  • We will fail. And fail again. Persistence is key!

"Specification By Example Using GUI Tests - How Could That Work?" - Geoff & Emily Bache

This was a talk I was really looking forward to, as it addresses something I have been struggling with a lot. Testing user interfaces is hard; combining it with Specification by Example is even harder. First, Emily gave a really good introduction that went through the challenges of GUI testing and Specification by Example in a rapid but clear way. Then Geoff gave a demo of the tool they use to tie the specifications to the GUI, TextTest. This is a completely wild approach! Every component in the GUI is rendered in ASCII, and the expectations on the component are expressed just that way: as ASCII art. Another cool thing was how they used record-playback to capture what happened in the GUI as someone was clicking around. After the recording was completed, you could give each specific action a domain-specific name. For instance, selecting a row in a table of animals became "Select animal". The tool then automatically understood the data in the table, creating reusable steps such as "Select animal Horse" or "Select animal Dog". Once actions are described and recorded, you can play them back later to verify that the rendered ASCII art matches the first run.

On the positive side, I think that examining the whole screen, as opposed to checking single components, might find more bugs than traditional record-playback approaches. The magic of discovering actions also seemed cool. But it appears to put a lot of constraints on the UI framework used; TextTest only supports some frameworks, such as PyGTK. Also, describing it as Specification by Example seems a bit of a stretch. The notion of assertions is completely gone, making the tests not that readable to business people. As it turned out, the tests they had created were not yet used "that much" by business people. Having a customer read ASCII art, well... I am not sure. But it is a fresh approach, that's for sure.

With Cloud Computing, Who Needs Performance Testing - Albert Witteveen

To be honest, performance testing is not for me. Whenever the subject is brought up, I run away. It seems to me that performance testing is a skill, a profession of its own, and I am more of a functional testing guy. Having said that, I didn't have the best reasons to attend this session, but this time slot was thin for me and this is what I ended up with. Albert is the kind of guy I would like to have look at my systems, because he really seems to have this stuff down. He spoke about queuing theory as the foundation for understanding performance and how computers behave: finding where work queues up is the key to finding bottlenecks. He also spoke about how performance testing has changed with the arrival of cloud-based solutions. But when asked how to find bottlenecks, he pretty much answered what I expected: that it is hard and requires a lot of skill and experience. I am glad that there are people like Albert who love this stuff, because I definitely don't.


Automation: Time to Change Our Models - Iain McCowatt

Oh, man. This talk might have been better than I first gave it credit for. I am an avid defender of automated tests and believe in pretty much everything that Iain dismissed as not that valuable. He spoke of how we use automation when testing. He contends that automation should serve as an instrument that gives us knowledge to drive our testing forward: we should use automation tools to help us dig into the system and gather data, and once that data is there, testers should examine it and use it as input to make qualified decisions about whether it is valid or not. Iain is a confident guy and makes his case very clearly, but dismissing automated tests as not valuable doesn't resonate with me. Maybe I need to change my mental model, but I am not exactly sure what I would replace it with. Some of the questions afterwards touched on this, as some people wondered whether it isn't rather a matter of expanding the model we already have with this new one. I believe that is very true. Of course we shouldn't blindly use automation instead of the skill of professional testers; the testers should put automation to use. But for regression tests, I believe an automated suite is the best way to go, especially if you have a system whose central parts change a lot. Exploratory testing aided by tools, in combination with a decent automated regression suite, is the way to go if you ask me.


Cross Team Testing – Managing Bias - Johan Åtting

This session was about how Johan and his teams helped each other out with testing. With every autonomous Scrum team having its own testers, testers sometimes got too comfortable and adapted their testing to how the software was built. They knew too much, basically. So they introduced a recurring event where testers would team up and test parts of the software they had not been part of building. This resulted in better quality and better software. A pair of unbiased eyes is valuable. Johan builds software for the health-care industry, where software errors kill people. Something to think about...


Agile Quality for the Risk-Averse - David Evans & Tim Wright

This talk was basically about how they adopted agile and how they managed prioritization and risk. A lot of models, guidance, boxes and arrows. Their teams have no testers; everyone does testing. This is something I believe is a good way to go: it makes everyone accountable for testing and prevents stuff from being thrown over fences. Apart from that, this session had poor timing. My mailbox was going berserk and my focus was on other software quality issues, closer to where I work. So I came away with very little from this one. Sorry about that...


Moving To Weekly Releases - Rob Lambert

This was a great one. Rob talked about how their company had gone from nine-month big-bang releases to releases every week, basically by adopting agile with everything that comes with it. Takeaways:
  • Make features togglable, so that features can be turned off if they don't work.
  • Put testing in the center of your process, not testers
  • Delivering often makes it less dramatic and gives more frequent rewards
  • Use a pre-release where selected users (in their case, their own company) can play around with the upcoming release before other users.
I have long dreamed of the situation that VoiceMedia obviously is in: a place where releases are frequent and quality and testing are everyone's business. That, to me, is what agile is all about.


Summing up

In all fairness, I must say that the sessions all seem a bit better now, with some hindsight. But I can't escape a feeling of disappointment. I wanted to go to a conference where I had trouble selecting which session to attend because they all seemed so interesting. Instead I found myself trying to find something at least a little promising. Maybe it was my own fault, taking this leap of faith. Maybe I am not the target audience. I think it is sad that we in the software business are still talking about the same things as being "new". Collaboration between developers, business people and testers has been part of agile for a long time, yet it is still presented at conferences as new and inspiring. Why haven't we gotten further down that path? Also, I would have expected "Europe's premier software testing event" to be at a higher level. I wanted sessions that presented completely new stuff: cutting-edge ideas, new tools, new practices. If I had never heard about agile, continuous delivery or test automation before, I would have learned a lot! So for junior people, this might have been perfect. The sessions mostly covered the same ground in basically the same way I have seen before. Maybe that's because I selected sessions in my comfort zone and area of interest.

When a conference is at its best, you leave sessions on a "high". I remember hearing Dan North speak at Oredev one year and I was blown away. I have been to coding presentations that left me itching to get to my computer and try stuff out. When Scott Hanselman spoke about "managing the flow" one year, it changed my way of working completely. I want those sessions, that feeling. But this conference had no such moments for me. Hopefully other attendees had that experience; that's what conferences should be about.

But I must also point out that the facilitation of sessions and everything around it was really professional. They ran a tight ship with great success.

In conclusion, maybe testing conferences are for testers. Maybe developers should attend developer conferences. At least this developer.

Tuesday, September 24, 2013

BDD style reporting in SoapUI Pro - Part 1

Tests as documentation
Since I discovered how well BDD concepts fit into SoapUI, I have been trying to spread those ideas and get more people to write tests in the Given-When-Then syntax. It can be a big step, but I believe it is worth it. Tests become so much more than tests: they become documentation and provide an easy way in for someone trying to understand what the system actually does.

Recently I have been doing a lot of exciting work in SoapUI Pro. I have created some custom test steps, guided by Ole Lensmar's excellent blog post, that make SoapUI usable in more scenarios. I'll go into that in another post, but for now I'll just say that SoapUI has become my primary testing tool for many projects.

The point of documenting something about your system is to have someone else understand it. Even if I am definitely in the target audience (how well do you remember what you wrote last week...?), the main point is to make the documentation understandable to other stakeholders. That's where my idea of BDD in SoapUI started to crumble. I have sent several screenshots from SoapUI to customers, but that's not really a professional approach. I have also opened up SoapUI projects to walk people through them, but then the tool sort of gets in the way. Many stakeholders don't care about what tool I use. They just want the information, preferably as short and concise as possible.

Enter SoapUI reporting
Recently I discovered the reporting capabilities of SoapUI. The built-in reports are mostly concerned with displaying test results, performance and coverage. Those are all good things, but what I was after was a report that could be used as documentation of the system, not a test result report. I am the one interested in red and green test cases; customers will assume that the system is working.

What I wanted was a specification that only contained my BDD features and scenarios with their Given-When-Then steps and some additional information. Plain and simple, I thought. I did have to battle the SoapUI object model, my lacking Java skills and JasperReports, but I think I won. After two nights of battle, I have ended up with something that is not yet perfect, but shows some great promise!

An example
Let's say you have created a test suite for a feature in your system. I will use the "withdraw cash" example from Dan North's excellent introduction to BDD again, possibly exhausting it even further beyond its limit...


Basically, this project now contains one feature with two scenarios. One happy flow and one where the customer is rejected. Tests are written with the Given-When-Then syntax. And yes, they're all fake.

To be even more expressive, I have added a description to the test suite "Customer withdraws cash".
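Since a screenshot doesn't travel well in text, the structure of such a project could look roughly like this (step names borrowed from Dan North's article; the actual suite naturally looks a bit different):

Test suite: Customer withdraws cash          (the feature)
  Test case: Account is in credit            (the happy flow)
    Given the account is in credit
    And the card is valid
    When the customer requests cash
    Then the cash is dispensed
  Test case: Account is overdrawn            (the customer is rejected)
    Given the account is overdrawn past the overdraft limit
    And the card is valid
    When the customer requests cash
    Then a rejection message is shown
    And no cash is dispensed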


Now, I could give this screenshot to the customer, but it would not be very... cool. It contains a lot of noise and may lead to discussions like "Why are there no Security Tests?" or "What do the symbols before the test step names mean, and why are some starred and some not...?". So instead, I'll open the project by double-clicking the "ReportingDemo" node, click the "Test suites" tab and then the "Create report" button:


Now a list of available reports appears, among them my own "BDDProjectReport". How did I get it there? Be patient...


If I select my report and click OK, this is what happens:

Magic! Instead of a screenshot, I can now hand this document to my customer. Away with all the noise, in with a clear and concise specification! The report iterates over all the test suites, test cases and test steps in my project. It adds some custom formatting, such as the bold Given-When-Then. It also finds the description of each suite and adds it to the report. It is actually dead simple, but that's what I think is so brilliant. This is exactly what I see in those SoapUI test suites, but to some stakeholders it is not that visible. They don't need to know how I implemented the tests, just that I did, and that the tests define the system we agree on. And the best part is that this is not an external document; it's the actual tests, only formatted for the intended audience.
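The JasperReports template itself will have to wait for the follow-up post, but the data it walks over is just the ordinary SoapUI object model. As a rough illustration (not the actual report code), the same traversal from a Groovy script step looks something like this:

// Walk the project the same way the report does: suites -> cases -> steps.
def project = testRunner.testCase.testSuite.project

project.testSuiteList.each { suite ->
    log.info "Feature: ${suite.name}"
    log.info "  ${suite.description ?: ''}"      // the suite description shown in the report
    suite.testCaseList.each { tc ->
        log.info "  Scenario: ${tc.name}"
        tc.testStepList.each { step ->
            log.info "    ${step.name}"          // e.g. "Given an account in credit"
        }
    }
}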

I think this is pretty great. So how did I do it?

Given that this blog post is already too long
When I consider adding the actual "how I did it" section
Then I'll put that in a blog post of its own
And put some pressure on myself to actually write it
And come off looking quite geeky writing this in Gherkin

Until then!

Tuesday, March 27, 2012

Towards an integration test data strategy

For me, integration tests are what really add that extra feeling of accomplishment to a piece of software I deliver. Achieving a decent design and a bunch of unit tests also add to that feeling, but the integration tests are the final touch (not that they are written last, but that's a different discussion...).

This post deals with system-level integration tests, where we test many components of the system in a deployed environment. We test the system like a user would, using a GUI, a web service or other interfaces. These tests should be portable to other environments so that we can use them as regression tests during the application's life cycle.

Cheating the data pain

For almost any integration test, data is something we have to consider. An integration test commonly depends on some amount of data being set up prior to the test. It might be data that your code uses, valid parameters it needs or data it produces. Selecting and managing this data is often hard and has been a frequent pain point in projects I have been part of.

So why is test data painful? Often the models our software is built on are complex, so understanding them requires hard work. It might be easy enough to understand them well enough for one test case to work, but it is a completely different thing to gain the general understanding needed to create dozens of test cases. Another painful attribute is portability. You might own and know the development environment pretty well and you may have some "dummy data" set up, but what if you are testing in the UAT environment? Customers will have access and, as we all know, they won't handle it gently...

So. Things are hard and painful. What happens? Here are a few options, pick one...

  1. We skip it. Integration tests take too much time, are too expensive and have no value.
  2. We skip it. We have unit tests.
  3. We kind of skip it. We create tests only in our local environment; that will have to do!
  4. We think we don't skip it, but we really do. We create smaller smoke tests in the environments outside of our control.
  5. We do it. We test in all the environments, since we want our bases covered. We know that stuff happens, that any environment is a new one, and that if we don't find the bugs, customers will.
Okay, that cheesy list might not be true or resemble any reality you know - but for anything difficult we tend to cheat. We do it on different levels and for different reasons, but we do it. We cheat.

Enduring the data pain

Since I have cheated the data pain many times, I wanted to explore how I could bring some order to this mess. That's what we developers do: we organize messy things into stuff that at least we can understand.

I think there are ways to cheat that actually don't impact the quality of your tests.
So, let's get to it. Basically, you have four approaches for any data in your tests.

1. Hard-coded

This is the "quick and dirty" approach. Here we assume that the same value will be available no matter what. Even if this may be true, it tends to be a quite naive approach. Moving from one environment to the other, data will change. But this approach is acceptable in certain cases:
  • When you are just trying to get something together for the first time
  • When you are creating throw-away tests (Why would you? Even the simplest test adds value to your regression test suite!)
  • When data really IS that stable (Countries, Languages etc)

2. Find any

This approach is a bit more ambitious, but still requires little effort. Let's assume that you need to use a Country for some reason. Your environment is not set up with every single country in the world, nor are countries static, so approach 1 is out of the question. For a database scenario, we'll create a simple "SELECT TOP 1 ... FROM xxx" query to retrieve the value to use, as sketched below. We don't care which country we get, as long as it's valid. Only selecting the columns you need is a sound approach for many reasons, one being improved resilience against schema changes.

Note: My examples assume that your data can only be retrieved from a database, but depending on the system you might be able to collect data via web services, REST services etc. 
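As a rough sketch, a "find any" lookup from a SoapUI Groovy script step could look like this, using groovy.sql.Sql (the driver, connection details, table and column names are all made up for the example, and the JDBC driver jar needs to be on SoapUI's classpath):

import groovy.sql.Sql

// Hypothetical connection details - replace with your environment's values.
def sql = Sql.newInstance(
    'jdbc:jtds:sqlserver://dbserver/testdb', 'testuser', 'secret',
    'net.sourceforge.jtds.jdbc.Driver')

// "Find any": grab the first valid country, selecting only the columns we need.
def country = sql.firstRow('SELECT TOP 1 CountryId, CountryCode FROM Country')

// Hand the value over to the rest of the suite via a property.
testRunner.testCase.testSuite.setPropertyValue('CountryCode', country.CountryCode as String)

sql.close()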

3. Find with predicate

Here's the more ambitious cousin of option 2. This time we make the same "SELECT TOP 1..." query, but we add some WHERE clauses, since which exact entity we get matters. In the simplest scenario we might just want to make sure that the entity we use has not been "soft-deleted". Another example (sticking to the country scenario) would be that we want a country that has states defined. Again, only query against columns that you use. When these predicates become very advanced and start to grow hair, consider this:
  • Will the predicate always produce a match, is the data stable enough? In all environments?
  • Should you consider creating a matching entity instead, using option 4?
Beware: some might think that updating an existing record is a good idea. You might produce a match that way, but you will also leave a footprint that has to be removed. Updated entries are a lot harder to keep track of than inserted ones, since you need to select and remember the previous values.
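Continuing the sketch from option 2, the predicate version only changes the query (table and column names are still invented):

// "Find with predicate": any country is not enough - it must have states and must not be soft-deleted.
def country = sql.firstRow('''
    SELECT TOP 1 c.CountryId, c.CountryCode
    FROM Country c
    WHERE c.IsDeleted = 0
      AND EXISTS (SELECT 1 FROM State s WHERE s.CountryId = c.CountryId)
''')
assert country != null : 'No country matched the predicate - is the data stable enough in this environment?'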

4. Create your own 

This is the hardcore solution. If you want it done right, do it yourself! Our SELECTs now become INSERTs and we create the entity we need ourselves. This requires the deepest level of model knowledge, since you need to know every column and table relation in order to make a valid insert.

So, if this is so great, why not use it everywhere and take the power back!? Well, there are a couple of reasons why such an approach has problems.

  • Vulnerable tests
    When you stick a large number of INSERT statements in your test setup, you depend heavily on a stable database schema. Any new non-nullable column, renamed column or removed column will shred your test to pieces. And it will probably not fail in a pretty way, but in a time-consuming way that ultimately will make people question your ambitious effort.
  • Non-portable tests
    I am targeting system-level integration tests that use the entire system, or at least as much of it as possible. Inserting data assumes that no duplicate data already exists, which is no problem in your empty developer sandbox database. However, I am guessing that empty databases are not that common in your deployed environments... so moving your test suite closer to the production environment will be impossible. There's just no way those environments will be empty.
  • Time
    Simply put, this approach takes too long. Figuring out all the references, understanding every piece of the database model even when much of it is irrelevant to what you are testing. Time can be spent more wisely.
  • Footprint
    Many inserts, large footprint. Cleaning it up is a large part of that data pain.
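For completeness, a minimal sketch of "create your own" in the same Groovy style, reusing the sql connection from the option 2 sketch and including the bookkeeping that makes the footprint removable afterwards (the schema is invented):

// Create the entity ourselves, with a key we control so it can be cleaned up afterwards.
def customerId = UUID.randomUUID().toString()
sql.execute('INSERT INTO Customer (CustomerId, Name, CustomerType) VALUES (?, ?, ?)',
            [customerId, 'Integration test customer', 'GOLD'])

// Remember the footprint so a teardown step can delete exactly what we created.
testRunner.testCase.setPropertyValue('CreatedCustomerId', customerId)

// ...and later, in a TearDown script or cleanup step:
// sql.execute('DELETE FROM Customer WHERE CustomerId = ?', [customerId])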

Selecting the right approach

So, I have these four options - how do I select which one to use when? I'll give you the architect's answer: it depends. It depends on several things, but I have started to think that there are two axes for categorizing test data.
(Figure: a model for categorizing test data and selecting a data approach)
The Y axis represents the stability of your data. Take the countries example: in a system where all countries are present everywhere, that is pretty stable data. At the other end of that axis is purely transactional data - data produced by transactions, such as orders, customers etc.

The X axis is of a more subjective nature. For any test case there is data that you wish you didn't need: it is required in order to create or find more important data, it is reference data, supporting data. At the other end we have the data that you are actually testing. If I am testing how different customer types affect order processing, then the customers and orders are what I am focused on. The focal data drives your tests and your software's logic.

Making the distinction between which data is focal and which is supportive is crucial to avoiding test data pain. It also forces us to understand what we are testing and what we are not. The "not" part is, in my experience, the most important, as it gives us boundaries that divide the problem into manageable chunks.

Summary

In projects I try to defend the quality of both the software and the tests. Schedules and pressure might lead us to think that cutting corners is a smart move, but it rarely is. That's why I wanted to bring some order to an area where I spend too much time arguing about stuff taking too long.

For some time I have advocated an approach where all data is created before the test and removed after, a strict "create your own" way. That is not only stupid, it also scares co-workers away from testing. Considering the other options, and seeing data from a focal/supportive and dynamic/stable perspective, enables me to make a new decision for each situation instead of trying to fit every integration test into the same mold. It lets me put the effort where it is needed and allow slack where it is acceptable.

In the end, I just want higher quality tests and more of them. This might be one piece of the puzzle.

Wednesday, March 21, 2012

soapUI 4.5 Beta 2

The 4.5 Beta 2 of soapUI has just been released and it certainly has some cool new features. This post is in no way a complete review, but two of the new features came surprisingly close to what I had at the top of my wish list.

1. Environment handling
I had a discussion with a co-worker some time ago about how we could port our test projects from one environment to another. "Why can't we just have a drop-down where we select the current environment?!" - that was our wish. Since the 4.0.1 version contained no such thing, I decided to work something out for myself, resulting in this blog post. Even if that did the trick, it was messy, manipulative and certainly no drop-down...

In the 4.5 version we have a new tab on the project level, called "Environments"

This tab will contain all of your defined environments where you will run your tests. Going from an old project with no environment handling to a new one is dead simple.

When you click the "+" to create a new environment you can select to "copy endpoints and credentials from the project". This means that all of your current endpoints will be saved in that environment.  









An environment is a set of endpoints, REST services, properties and database connections. Each of these artifacts has a name which you can then use in your tests. For instance, if you are setting up a JDBC test step you will be able to select from the defined database connections by name. When you switch between environments, all the JDBC steps using that name are automatically targeted against the JDBC connection defined under that name in the selected environment. Awesome! Using environments is now completely transparent.

And best of all, I got my drop-down:

This drop-down is available on all levels: project, test suite and test case. Switching the environment on any of these levels has a global impact, meaning that all subsequent requests (SOAP requests, JDBC requests) will target the selected environment.

For me this feature is a huge improvement. I have not yet tried the custom properties on the environment level, but I think that feature has some potential as well. It is the whole transparency aspect that appeals to me.

2. Assertion test step
Now this is a feature that many might be excited about for completely different reasons than me. It means that we now have a step type focused solely on asserting. With it, you can make very complex assertions using the interface we are used to, with all the guidance for XPath expressions and so on.

For me, this just makes my argument for BDD using soapUI even stronger. In this blog post I described how I implement the Given-When-Then syntax in soapUI. The only quirk about it was that I had to do some Groovy-script ninja tricks to get the syntax clean. Let's do a short recap:

  • The GIVEN steps constitute the background of the test, the circumstances in which the test is executing. Normally this is a bunch of SOAP and JDBC requests setting up data. Many times we'll need several steps to set everything up.
  • The WHEN step is the actual execution of my web service. This is normally only one step, the web service call.
  • The THEN steps are the assertions. If I am asserting something from my WHEN request, the plain approach is to put the assertions inside that step. But then we'll have no THEN step. Previously I solved this by adding a "virtual" assertion step using Groovy, but that caused some frowning among not-so-Groovy-familiar co-workers...
Now, this last issue with the THEN steps is history. The assertion test step is a perfect match for my THEN step. So now there really is no reason not to go BDD with soapUI.


In summary 
These two features really strengthen soapUI's position as my tool of choice for web service testing. The environments improve maintainability; the assertion steps improve readability by enabling BDD. Both features reduce the need for "ninja stuff" in Groovy. Don't get me wrong - I enjoy a good ninja coding spree as much as the next developer, but for me testing is all about understanding. It is about making as many stakeholders as possible understand what is happening, what the requirements are and whether we are meeting them. I think we are on the right track with this tool.

Thursday, February 9, 2012

Handling test environments with Soap UI


Dealing with multiple environments for a piece of software is something most of us do. At the very least, you'll have a testing environment that is separate from your production environment. In many cases there will be a lot of testing environments, representing the different stages of quality assurance.
Some examples might be:

  • Developer sandbox
  • Internal test
  • External test
  • Acceptance test
  • Pre-production
  • Production
When using Soap UI for testing, you want to be able to perform tests in all of your environments. Previously we kind of struggled with this, since we had to have separate projects for each environment. Maintaining tests through these stages was a real pain...

Each environment has its own endpoint for the web service under test. But since we also test against the database, each environment also has its own connection string. The endpoint problem was quite easy to handle manually through the Assign menu option in the WSDL interface screen, but reassigning the database connection was something else.

I recently managed to come up with a way that removed the need for those separate projects completely. This post will try to explain my way of doing it, and hopefully someone out there has an even better solution...

1. Open the "overview" tab of the project, by double-clicking it
The project view has some really good features worth exploring. Properties are really powerful, but there is more - wait and see...

2. Create a property under your project, called "Environment"

This property will hold the name of your environment, such as "Dev", "Acceptance" or "Pre-production".

3. Create one property for each of your service endpoint addresses

You'll need to create one property per environment, which is tiresome - but on the other hand it gives you a good place to find those hard-to-remember addresses!

4. Create one property for each of your database connections

I am using the jtds driver, enabling Windows authentication for SQL Server from Soap UI.
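As an example, such a property value for the jTDS driver could look something like this (the server, database and domain names are placeholders; check the jTDS documentation for the exact options your setup needs):

Driver: net.sourceforge.jtds.jdbc.Driver
Connection string: jdbc:jtds:sqlserver://devdbserver:1433/MyAppDb;domain=MYDOMAIN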

5. Hook into the event model
Now this was something new to me. Soap UI has a pretty extensive event model that lets you execute code on certain events.

Still in the project view, open the "Events" tab. Click the + to add a new event handler.
I selected the "TestSuiteRunListener.beforeRun" event, which fires just before an entire test suite is run. This way, my environment configuration fires only when I run the entire suite. Executing single test cases is something I'll do more during development.

There are many events to select from and I have not examined them all, but most names are pretty self-explanatory.

Now you'll end up with an empty Groovy script code window. I'll break my script up into pieces to make it easier to read. Sorry about the images, but I couldn't get syntax coloring otherwise...
The text version is here.

First we need to import some stuff.
Then I collect all of the property values we created earlier.

Then we do a simple if-statement, checking which environment is selected, keeping in mind that someone may have entered something incorrectly.

Finally, the code that actually does something. First we loop through all the test steps of all test cases in all the suites to find steps that use the web service, and replace the endpoint address with the selected one. Then we repeat the procedure for our connection string.
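Since the original screenshots are gone, here is a rough sketch of what such an event-handler script can look like. It is not the exact script from the images: the property names are examples, the variables exposed to the handler can differ between SoapUI versions, and the JDBC part in particular should be verified against your version's API.

import com.eviware.soapui.impl.wsdl.teststeps.WsdlTestRequestStep
import com.eviware.soapui.impl.wsdl.teststeps.JdbcRequestTestStep

// Assumption: the handler exposes a testRunner for the suite being run.
def project = testRunner.testSuite.project

// Example property names, matching the properties created in steps 2-4.
def environment = project.getPropertyValue('Environment')
def endpoint = project.getPropertyValue("Endpoint_${environment}")
def connectionString = project.getPropertyValue("ConnectionString_${environment}")

assert endpoint != null : "Unknown environment: ${environment}"

project.testSuiteList.each { suite ->
    suite.testCaseList.each { tc ->
        tc.testStepList.each { step ->
            if (step instanceof WsdlTestRequestStep) {
                // Point the SOAP request at the selected environment's endpoint.
                step.testRequest.endpoint = endpoint
            } else if (step instanceof JdbcRequestTestStep) {
                // Setting the connection string depends on the SoapUI version's API,
                // e.g. via the step's config object - verify before relying on it:
                // step.jdbcRequestTestStepConfig.connectionString = connectionString
            }
        }
    }
}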


In summary
So, what does this do?
Whenever I run a test suite, this event handler resets all requests to use the WSDL endpoint of my choice and the connection string that we want. The code isn't pretty or optimized, but that's not my concern just now. I wanted to see if it could be done. And it could.

Any better ideas (which there must be?!) are appreciated!!

Thursday, December 15, 2011

Database integration with SOAP UI - in way over my head...

Today I embarked on a journey to create that real database integration test I have always wanted.
Many times in the past I have created tests for applications that use a complex database structure to produce some result. Every time, I have resorted to setting up a static data structure and then running my tests against that collection of data.

The problem with my old approach is that it is not very stable. What if someone tampers with your data? What if you want to move the tests to a UAT environment, or worse yet - production?

The ideal scenario, I think, is to have the test
1. Set all the necessary data up
2. Do the test
3. Clean up the data  

But that sounds a lot simpler than it is.

Setting the data up
Using Soap UI, I created a bunch of JDBC test steps to create every tiny bit of data that I needed. I put all of these steps in a disabled test case that I trigger manually from the actual test cases.
I divided the data into two categories:

  1. Stable data, or "master data"
    This data will most probably exist in all environments, but not necessarily with the same ids.
    Therefore I needed a bunch of "find XXX" test steps. I then transferred all those values into properties I could access later. 
  2. Dynamic data
    This is the data I will perform my actual tests against. It's important that I have full control over this data, both to create valid tests and to be able to trust the results. For this I created a bunch of JDBC steps that insert data into different tables.
One thing I discovered is that having GUIDs as primary keys, as opposed to auto-increment ids, makes testing easier. I created some Groovy test steps that produce new GUIDs for all new entities and store the values as properties for later access.
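Such a Groovy test step can be as small as this (the property name is just an example):

// Generate a fresh GUID and expose it as a test case property,
// so later JDBC insert steps can reference it as ${#TestCase#NewOrderId}.
def newOrderId = UUID.randomUUID().toString()
testRunner.testCase.setPropertyValue('NewOrderId', newOrderId)
log.info "Created id for this run: ${newOrderId}"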

As you can see on the right - there were a whole bunch of steps to make this happen...

Testing against the data
This is the easy part. Since all the values I needed were transferred into properties, I could use them in my actual tests. I actually put the data setup test case in a disabled test suite, and that is quite useful. In my tests I don't want to care about where the data comes from or how it is created. I only want to be provided with values that will make my tests do what I want them to do. Properties in Soap UI are your friend here.

Cleaning up
To clean up I created another test case in my disabled suite that took all of the generated Ids and deleted them from the database. No trace whatsoever! But... the infrastructure to do this is quite tiresome. Read on...

Maintaining tests
I wanted to create my data once and then run many tests against it. In fact, I wanted to create some base data that would be available for all tests, and then some per-test data. The per-test data can be set up using a test step in my test; it fits nicely under a GIVEN step (see my post on BDD in Soap UI). But the cleanup fits nowhere in the test. Soap UI gives you Setup and TearDown possibilities on many levels, so for this I put a TearDown script on my test case.

I created a disabled test case that took care of the cleaning and I made a Groovy script call to execute it.
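The actual call was in a screenshot, but a TearDown script that runs a disabled cleanup test case typically looks something like this (the suite and test case names are examples):

import com.eviware.soapui.support.types.StringToObjectMap

// Look up the cleanup test case in the disabled data suite and run it synchronously.
def cleanupCase = testRunner.testCase.testSuite.project
        .getTestSuiteByName('Data setup and cleanup')
        .getTestCaseByName('Clean up test data')
cleanupCase.run(new StringToObjectMap(), false)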





While this is nice and dandy for the per-test data, the global data was still an issue. I could put setup and teardown scripts on the test suite level, but that requires anyone who uses these tests to never run any case in isolation. Otherwise the global data would either not be available or would linger on long after the test was finished.

So, I put all of the data creation and cleaning on all the test cases. That did nothing good for my performance... What I am considering now is some sort of flag for the setup, so that a test case will not set up the global data if it has already been set up.
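As a sketch, such a flag could be a project property checked in a Setup script before running the expensive global setup (the names are invented, and this is exactly the kind of flag I am unsure about below):

// Only create the global data once per project session.
def project = testRunner.testCase.testSuite.project
if (project.getPropertyValue('GlobalDataCreated') != 'true') {
    project.getTestSuiteByName('Data setup and cleanup')
           .getTestCaseByName('Create global data')
           .run(new com.eviware.soapui.support.types.StringToObjectMap(), false)
    project.setPropertyValue('GlobalDataCreated', 'true')
}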

BUT... I think flags like "IsSetup" are kind of a smell. So, here I am: I have stable tests that perform badly. Do I care? Well, if the trade-off is between performance and stability, I choose stability any day. But I would really like to find a better way of doing this. Maybe it is not supposed to be done at all? Maybe I am grasping for test Utopia?

With those indecisive words, I bid you good night. 

Ps. Any other suggestions on how to do these kinds of tests, or reasons why I shouldn't do them at all, are appreciated. Ds.

Thursday, December 1, 2011

Going end-to-end with Soap UI

One cool thing we managed to do with Soap UI is create some real end-to-end tests. Many times you end up testing almost an entire feature: it simply takes too long to test it all, and you settle for some manual testing of those final things.
One example we had was a web service that, when called, would change the status of something in the database and then publish a file on a network share. Testing the database change is simple, but the file part was too hard. Or at least I thought it was...

My scenario looked something like this:

GIVEN an existing order
WHEN the order status is changed to "Processed"
THEN the order changes status to "Processed"
AND an event for "Processed" is published to an XML file

The Given is set up using either a Test Request step or a JDBC step.
The When is the actual service call, a Test Request step.
The Then we did by asking another service for the status of the order, but it could also have been done with a JDBC step.
The And is the interesting part. That required a bit of Groovy script programming.

Checking that a file exists
This is dead simple in Groovy:
def file = new File(orderStatusFilePath);
assert file.exists();


Finding the newest file
In my case I did not know the file name. I knew that a file would be published in a folder and that I had to grab the latest one. Maybe this is something bad in our design, but that's the way it is. So how do we test it?

First, let's declare our search criteria. I want files that are at most one day old:
def today = new Date()
def criteria = {file -> today - new Date(file.lastModified()) < 1}


Then I want to find the newest file. I always use a property in my test suite for storing file paths. The actual searching might seem complex, but it's really just a matter of listing the files that match our criteria, defining how we want them compared, sorting them using that comparison and taking the last one.


def orderStatusFilePath =
        context.expand( '${#TestSuite#OrderStatusFilePath}' )
def xmlFile = new File(orderStatusFilePath).listFiles().findAll(criteria).sort { a, b ->
        a.lastModified() <=> b.lastModified()
    }.last();


It's worth noting that a File object in Groovy can point to a particular file or to a directory. In my first example I located a single file; this last example used a File pointing to a directory. I found this solution after maybe half an hour of googling - there is tons of material on Groovy out there!

Parsing XML
Just finding that file made me very happy - but not satisfied. Since it is an XML file, why not look inside and check that the correct status was set? That is the kind of thing you'll often test once or twice and then trust works forever. By making it part of the test suite, we'll know.

Groovy has some really good support for XML. First, create a parser based on our file contents.

def parsedXML = new XmlParser().parseText(xmlFile.text)


Then, within that XML document, find the tag <status> and get the text within (note that the GPath expression is case-sensitive).
def eventStatus = parsedXML.status.text()


And now - for the icing on the cake, assert!
assert 'Processed' == eventStatus

In conclusion
Testing service calls and databases has been part of our Soap UI test suites in the past, but adding file checking and XML parsing really boosted those suites. They have now become real end-to-end tests. And I thought I saw a small tear in the eye of my test leader...

I have just begun to use Groovy test steps in my test suites, but they seem really powerful. The only downside is that it's hard for a non-developer to grasp the details. But I think that if we name our steps using the Given-When-Then syntax, what lies behind the steps becomes less important.

But what if Soap UI came with this feature built-in? That would be sweet!