This time of year normally means going to Oredev for me. Being a developer, a conference for developers would seem the natural choice. But this year I took a leap of faith and decided to attend EuroSTAR instead, a software testing conference in Gothenburg. My interest in quality and testing made me curious whether I could find alternative perspectives on the subject by seeing it from a tester's point of view.
My first impression, after a glance at the content, was that there is one big difference for me. A superficial one, but a difference nonetheless. At a developer conference, there are always a few "rock stars" from the community. In the testing world, I am not as familiar with the "celebrities", so the possibility of becoming star-struck seemed slim. That in itself says nothing about the quality of the content, but it always adds to the experience to see idols such as Scott Hanselman, Dan North or Gojko Adzic in real life. All in all, the testing community is as new to me as testing conferences.
I went into the conference with mixed emotions: one part of me was excited to be in an environment where everyone is as passionate about software quality as I am, the other part nervous that not choosing a developer conference was a bad idea.
Keynote 1: “Skeptical self-defense for the serious tester” – Laurent Bossavit
The keynote began with some administrative information about rules and practices for the conference. EuroSTAR uses an interactive scheme for sessions, with Q & A as an essential part of the talks. My previous conferences have also had Q & A in their sessions, but often only if time permitted. And since many speakers run over their assigned time, discussion at the end is usually limited.
Another new thing is how the Q & A part is facilitated. All attendees are given three colored sheets of paper and a unique number. If I want to ask a question on a new subject, I raise the green card. If I want to continue on the current topic, I use the yellow card. If I have something super-urgent to say, I use my red card. At first, this seemed overly ambitious. We are, after all, in Sweden, a country of people who avoid speaking in public like the plague. But I was surprised to see so many people take to this technique so quickly, and it was facilitated in a very professional way. That said, the attendees were very international and most questions came from the non-Swedish part of the crowd, so my plague theory might still be valid... The key thing, though, was that it worked a lot better than I expected.
The keynote itself was about questioning facts. There have been a lot of statements over the years that have been accepted as absolute truth, such as the cost increase of finding bugs late in the development process. Laurent encouraged us to question such statements and seek the actual facts behind these claims. To be skeptics. I think he got his message across, and it is a valid message, but it was a little long-winded and the talk was a bit slow in pace for my taste. All respect to Laurent, who really has put a lot of effort into his research and obviously knows a lot about it. One funny coincidence: when I came home and watched some children's TV with my kids, there was a show called "Is it true?". It made much the same points about being skeptical and carried roughly the same message. In 15 minutes, at a kid's level. Maybe it was the fact that he had the "graveyard shift" after lunch that affected my experience. Damn you, chicken tikka masala :)
“Experiences of test automation at Spotify” – Kristian Karl
I had seen this talk before, at MeetUI in the spring, but this version was longer and delved into the team structure at Spotify, among other things. The setup at Spotify is really impressive, both from an organizational and a test automation standpoint. Karl described the really cool automated tests that ensure Spotify quality on multiple platforms. The tooling used was all open-source, to be economically scalable. He also described the notion of "guilds" at Spotify: cross-team interest groups that exchange experiences in a common field of interest, such as automation or deployment. One thing that is perfectly clear is that Spotify is a product that everyone knows and is interested in. The session was packed beyond its limit and a swarm of people hovered around Karl afterwards. A great talk, all in all!
"Questioning acceptance tests" - Adrian Rapan
Given the topic of this talk, I was expecting to see something about acceptance tests, which for me means tests that face the business. But this session took a completely different path. Adrian described how they had come up with automated tests for the calculations of a trading application. It was a pretty cool thing they had invented, even if I am still not sure where to place these tests. They were, as Adrian admitted, not readable by business people. He described how they had used Spock, QuickCheck and property-based testing to generate hundreds of tests for financial trading business rules. Interesting stuff, but not so much acceptance tests as I expected. As a developer, I like code and cool technical stuff, but I am not sure how many testers could follow what Adrian was talking about. I am not even sure I did... :)
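For readers unfamiliar with property-based testing, the core idea can be sketched in a few lines. This is a minimal, hand-rolled illustration in Python rather than the Spock/QuickCheck setup Adrian actually used, and the commission rule here is an invented example, not one from the talk:

```python
import random

# Hypothetical business rule (illustrative only, not from the talk):
# commission on a trade is 0.1% of the notional, capped at 500.
def commission(notional):
    return min(notional * 0.001, 500.0)

# A property that must hold for ALL inputs, not just hand-picked examples:
# the commission is never negative and never exceeds the cap.
def property_holds(notional):
    c = commission(notional)
    return 0.0 <= c <= 500.0

def check_property(trials=1000, seed=42):
    """Generate random inputs and assert the property, QuickCheck-style."""
    rng = random.Random(seed)
    for _ in range(trials):
        notional = rng.uniform(0, 10_000_000)
        assert property_holds(notional), f"property failed for {notional}"
    return trials

print(check_property())
```

Instead of writing each test case by hand, you state an invariant and let the generator hammer it with hundreds of inputs, which is presumably how a handful of rules turned into hundreds of tests.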
Keynote 2: "Testing Machines As Social Prostheses" - Robert Evans
How are computers and humans different? Humans can understand a social context; a computer cannot. Even if we throw more and more rules at a computer, it will not be able to mimic human behavior completely. This is what Robert contended, and he made a very convincing argument. He used the example of a spell checker, which can only use a dictionary to make sense of language. A human can understand that a spelling error might be okay in a certain context. Robert also talked about how we as humans accept flaws in computer behavior compared to the corresponding human version. We "fill in the gaps" so that we can interact effectively. An automatic check-out station at the grocery store is not a human cashier, but we're willing to accept that and adapt to the new behavior. An intriguing talk, delivered at a fast and steady pace by someone who knew what he was talking about. Cool stuff, although I'm not at all sure how I'll put it to use. One takeaway is that I am no longer expecting any human-like robots any time soon...
Keynote 3: "Creating Dissonance: Overcoming Organizational Bias Toward Software Testing" - Keith Klain
This was the best talk of the conference. Keith is an awesome presenter and he really delivered. He spoke about the bias that testers and testing encounter in the business: tests cost too much, take too much time, are useless, and so on. He described approaches for fighting these biases and how to start making a difference. What I took away from his advice:
- We should become field experts, read up!
- We should strive to win over everyone, not only company leaders. If we only convince the CEO and he/she is replaced, no change will stand.
- Don't settle for mediocrity
- We will fail. And fail again. Persistence is key!
"Specification By Example Using GUI Tests - How Could That Work?" - Geoff & Emily Bache
This was a talk I was really looking forward to, as it describes something I have been struggling with a lot. Testing user interfaces is hard; combining it with specification-by-example is even harder. First of all, Emily gave a really good introduction that went through the challenges of GUI testing and Specification-By-Example in a rapid but clear way. Then Geoff gave a demo of TextTest, the tool they used to tie the specifications to the GUI. This was a completely wild approach! Every component in the GUI is rendered in ASCII, and the expectations on the component are expressed just that way: in ASCII art. Another cool thing was how they used record-playback to read what happened in the GUI as someone was clicking around. After the recording was completed, you could give each specific action a domain-specific name. For instance, selecting a row in a table of animals became "Select animal". It then automatically understood the data in the table, creating reusable steps such as "Select animal Horse" or "Select animal Dog". Once actions are described and recorded, you can play them back later to see that the rendered ASCII art matches the first run.
On the positive side, I think that examining the whole screen, as opposed to checking single components, might find more bugs than traditional record-playback approaches. The magic of finding actions also seemed cool. But it seems to put a lot of constraints on the UI framework used; TextTest only supports some frameworks, such as PyGTK. Also, describing it as Specification-By-Example seems a bit false. The notion of assertions is completely gone, making the tests not that readable to business people. As it turned out, the tests they had created were not yet used "so much" by business people. Having a customer read ASCII art, well... I am not sure. But it is a fresh approach, that's for sure.
"With Cloud Computing, Who Needs Performance Testing?" - Albert Witteveen
To be honest, performance testing is not for me. Whenever the subject is brought up, I run away. It seems to me that performance testing is a skill, a profession of its own. I am more of a functional testing guy. Having said that, I didn't have the best reasons to attend this session. But this time slot was thin for me, and this was what I ended up with. Albert is the kind of guy I would like to have look at my systems, because he really seems to have this stuff down. He spoke about queuing theory being the foundation for understanding performance and how computers behave: finding where stuff queues up is the key to finding bottlenecks. He also spoke about how performance testing has changed due to the existence of cloud-based solutions. But when asked how to find bottlenecks, he pretty much answered what I thought: that it is hard and requires a lot of skill and experience. I am glad that there are people like Albert who love this stuff, because I definitely don't.
"Automation: Time to Change Our Models" - Iain McCowatt
Oh, man. This talk might have been better than I first gave it credit for. I am an avid defender of automated tests and pretty much believe in everything that Iain dismissed as not that valuable. He spoke of how we use automation when testing. He contends that automation could serve a purpose as an instrument that gives us knowledge to drive our testing forward. We should use automation tools to help us dig into the system and get us data. Once that data is there, testers should examine it and use it as input to make qualified decisions on whether it is valid or not. Iain is a confident guy and makes his case very clearly, but dismissing automated tests as not valuable doesn't resonate with me. Maybe I need to change my mental model, but I am not exactly sure what I would replace it with. Some of the questions afterwards touched on this, as some people wondered whether it's not a matter of expanding the model we have with this new one. I believe that is very true. Of course we shouldn't blindly use automation instead of the skill of professional testers; the testers should put automation to use. But for regression tests, I believe an automated set is the best way to go, especially if you have a system that is changing a lot in central parts. Exploratory testing aided by tools, in combination with a decent automated regression suite, is the way to go if you ask me.
"Cross Team Testing – Managing Bias" - Johan Åtting
This session was about how Johan and his teams helped each other out with testing. With every autonomous Scrum team having its own testers, testers sometimes got too comfortable and adapted their testing to how the software was built. They knew too much, basically. So they introduced a recurring event where testers would team up and test parts of the software they had not been part of building. This resulted in better quality and better software. A pair of unbiased eyes is valuable. Johan builds software for the health-care industry, where software errors can kill people. Something to think about...
"Agile Quality for the Risk-Averse" - David Evans & Tim Wright
This talk was basically about how they adopted agile and how they managed prioritization and risk. A lot of models, guidance, boxes and arrows. Their teams had no dedicated testers; everyone does the testing. This is something I believe is a good way to go. It makes everyone accountable for testing and prevents stuff from being thrown over fences. Apart from that, this session had poor timing for me. My mailbox was going berserk and my focus was on other software quality issues, closer to where I work. So I came away with very little from this one. Sorry about that...
"Moving To Weekly Releases" - Rob Lambert
This was a great one. Rob talked about how their company had gone from 9-month big-bang releases to releases every week, basically by adopting agile with everything that comes with it. Takeaways:
- Make features togglable, so that features can be turned off if they don't work.
- Put testing in the center of your process, not testers
- Delivering often makes it less dramatic and gives more frequent rewards
- Use a pre-release where selected users (in their case, their own company) can play around with the upcoming release before other users get it.
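The first takeaway, feature toggles, can be sketched in a few lines. This is a minimal illustration with invented flag names, not code from Rob's talk; in practice the flags would come from configuration or a toggle service rather than a hard-coded dict:

```python
# Hypothetical feature flags (illustrative names, not from the talk).
FEATURE_FLAGS = {
    "new_checkout": True,
    "beta_search": False,
}

def is_enabled(feature, flags=FEATURE_FLAGS):
    """Treat unknown features as off, so a missing flag fails safe."""
    return flags.get(feature, False)

def render_search():
    # Both code paths ship with the release; the toggle decides which
    # one runs, so a broken feature can be turned off without a redeploy.
    if is_enabled("beta_search"):
        return "beta search UI"
    return "classic search UI"

print(render_search())  # beta_search is off, so the classic path runs
```

The point is that turning a feature off becomes a configuration change instead of an emergency rollback, which is what makes weekly releases less dramatic.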
I have long dreamed of the situation that NewVoiceMedia obviously is in: a place where releases are frequent and quality and testing is everyone's business. That, to me, is what agile is all about.