I've been working in professional software development for more than 10 years now.
I love to write working software, and I hate fixing bugs.
Hence, I developed a strong focus on test automation, continuous delivery/deployment and agile principles.
Later I came to the insight that the most sustainable way of fixing code is to optimize those who code.
For that reason I dug deeper into psychological safety, cognitive biases and ways to spread knowledge within software producing organizations.
Since 2014 I have been working at REWE digital as a software engineer and internal coach for quality assurance and testing.
As such, my main objective is to support our development teams in QA and test automation, empowering them to write awesome, bug-free software fast.
How (Not) to Measure Quality
Measuring quality requires many questions to be answered.
The most obvious one may be “What is quality?”, but there are also “How can we measure it?”, “Which metrics are most accurate?”, and “Which are most practical?”.
In my experience, one question is often not answered or postponed until it is too late: “Why do we want to measure quality?”
Is it because we want to control how well our developers are performing?
Is it to detect problems early?
Is it to measure the impact of changes?
Is it the product or the process we care about?
Is it to improve locally in a single team or globally across the company?
Is there a specific problem that we are trying to solve, and if so, which one?
Instead of trying to define what software quality is (which is hard and depends on many factors), we should first focus on the impact of our measuring.
Some metrics may work great for one team, but not for the company as a whole. Some will help you reach your team or organizational goal, some will not help at all, and some will even have terrible side effects by setting unintended incentives.
Some can be gamed, others might be harmful to motivation.
Consider an overemphasis on lead time, which can lead to cutting corners.
Or measuring the number of bugs found, which can cause a testers versus developers situation.
In this talk, I share some general motivations for measuring quality.
I review various commonly used metrics that claim to measure quality.
Based on my experience, I rate how helpful or harmful each may be in achieving actual goals, and which side effects are to be expected.
I give some examples of how the weaknesses of one metric can be countered by another to create a beneficial system.
- Identify different approaches on how (not) to measure quality.
- Assess commonly used quality metrics against different purposes.
- Be aware of possible side effects of measurements.
- Understand how metrics can be combined to even out each other's weaknesses.
Writing Tests Like Shakespeare
Automated tests—especially UI tests—often lack readability.
They have either a very fine-grained description of the performed actions or an overly sophisticated abstraction that leaves the reader guessing or digging deep into the code base.
This becomes a serious problem when we try to understand the context of a test failure.
A NoSuchElementException only tells us which element was missing, not why we expected it to be there at that specific point in our test.
Another common issue in complex tests is code duplication.
Either we just copy and paste long sequences of dull commands, or we forget to use the helper functions and utilities we have tucked away.
This makes maintenance a very frustrating experience.
Finally, such code is often not fit for being shared among teams working on the same product, or for being reused in different tests.
The Screenplay pattern offers a user-centric way of writing tests which abstracts fine-grained interactions into simple non-technical tasks and questions.
These make it easy to introduce just the right level of abstraction for the use case at hand.
The resulting test code is almost plain natural language with only a few extra “cody” characters, but I assertThat(averageReader.does(understandThis())).isTrue().
As every failure is happening in the context of performing a task or answering a question, understanding failures also automatically becomes a much easier endeavor.
Sharing Screenplay code between tests or even among teams is pretty easy as the tasks and questions are implemented as simple immutable objects.
This also makes it easy to implement Screenplays in any preferred language and framework.
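To make the idea concrete, here is a minimal, framework-free sketch of the pattern's core concepts. The names (Actor, Task, Question, attemptsTo, asksFor) follow common Screenplay frameworks such as Serenity BDD, but the shopping-cart example and all implementations below are illustrative assumptions, not any specific framework's API.

```java
import java.util.ArrayList;
import java.util.List;

public class ScreenplayDemo {

    // A Task is an immutable, named unit of user behaviour.
    interface Task {
        void performAs(Actor actor);
    }

    // A Question reads some state from the actor's world.
    interface Question<T> {
        T answeredBy(Actor actor);
    }

    // The Actor performs tasks and answers questions.
    static class Actor {
        final String name;
        final List<String> cart = new ArrayList<>(); // stands in for the browser/app state

        Actor(String name) { this.name = name; }

        Actor attemptsTo(Task... tasks) {
            for (Task task : tasks) task.performAs(this);
            return this;
        }

        <T> T asksFor(Question<T> question) {
            return question.answeredBy(this);
        }
    }

    // Factory methods give tasks and questions readable, natural-language names.
    static Task addToCart(String item) {
        return actor -> actor.cart.add(item);
    }

    static Question<Integer> theNumberOfCartItems() {
        return actor -> actor.cart.size();
    }

    public static void main(String[] args) {
        Actor rita = new Actor("Rita");
        rita.attemptsTo(addToCart("apples"), addToCart("milk"));
        System.out.println(rita.asksFor(theNumberOfCartItems())); // prints 2
    }
}
```

Because each task and question is a small immutable object, a failure always carries the name of the task or question being performed, which is exactly the context a plain NoSuchElementException lacks.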
So if this sounds like a good idea to you, come to my talk and learn how to write tests like Shakespeare.
- Learn what the Screenplay pattern is and how it can help you to write better tests.
- Get to know the key concepts of Screenplay so you can implement your own framework or use an existing one that supports the pattern.
- Discover how object-oriented design can help to make test code less cumbersome.
- Find out about a way to keep your tests concise and readable, while using all the Selenium tweaks and tricks to keep them fast and reliable.
- See how Screenplays allow you to write tests that use different media (like web, email, or APIs) with surprising ease.
Fantastic Biases & Where to Find Them in Software Development
Why did all our test cases fail because of this simple bug?
Nobody tried that out before?
How did five people agree to implement this terrible feature?
Why are our estimates always so far off?
There are many possible answers to those questions and none of them will be the whole truth.
However, certain common cognitive biases might play a main role in all the events leading to those questions.
We all have them.
They help us to think faster, but they also make us less rational than we think we are.
They hinder our best judgement!
In this talk I'll demonstrate some of the most severe biases, explain their background, point out how they typically influence our professional decisions, and suggest some strategies to mitigate their effect.
Being able to recognize and overcome biases in us and others is a long, challenging road for anyone – you won’t be able to do that journey with this talk alone, but you’ll certainly take your first step!
- Understand what cognitive biases are.
- Acknowledge that you are biased – like everyone else is as well.
- Get to know some of the most severe biases.
- Learn about some mitigation strategies.
From Monolith Testing to Microservice Quality Assurance
When REWE digital started to sell groceries online, we launched with a massive monolithic piece of software developed in only six months by a software agency.
Right after launch we started to build up our own software teams to take over further development, but we had a hard time developing new features without breaking existing functionality…
…today the monolith is still in place, but most of its functionality has been replaced by microservices which are communicating via asynchronous messaging and deliver their own frontends.
In this session we will talk about challenges we faced over the past three years:
- optimizing the monolith's architecture for faster feature development
- breaking it apart into microservices
- adjusting the QA strategy from a single deployment release process to 40 teams deploying their services whenever they want to
- developing new types of testing for microservices and micro-frontends
- solving problems with testing asynchronously-communicating microservices
- organizing QA in a rapidly growing company
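To illustrate the asynchronous-testing challenge: when services communicate via messaging, a test cannot assert on state immediately after an action; it has to poll for an eventual outcome. A minimal hand-rolled sketch (real projects often use a library such as Awaitility for this; the simulated queue and timings are illustrative):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.BooleanSupplier;

public class AsyncAssertDemo {

    // Poll a condition until it holds or the timeout expires.
    static boolean awaitUntil(BooleanSupplier condition, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) return true;
            try {
                Thread.sleep(25); // poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) {
        // Simulated message queue between two services.
        Queue<String> events = new ConcurrentLinkedQueue<>();

        // The "producer" service emits its event asynchronously, some time later.
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            events.add("order-created");
        }).start();

        // The test asserts on the eventual outcome instead of a fixed point in time.
        boolean arrived = awaitUntil(() -> events.contains("order-created"), 2000);
        System.out.println(arrived ? "event arrived" : "timed out");
    }
}
```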
- new English slides
- old English slides
- old German slides
- Video from REWE digital Meetup Ilmenau (German)
Team-Driven Microservice Quality Assurance
While the microservice architectural style has a lot of benefits, it makes certain QA practices impractical: there is no big release candidate that can be tested before it is put into production, no single log file to look into for root cause analysis, and no single team to assign found bugs to. Instead, there are deployments happening during test runs, as many log files as there are microservices, and many teams to mess with the product.
At REWE digital we took a strictly team-driven QA approach.
Our teams tried a lot of good and bad ideas to QA our microservice ecosystem.
This involves automated testing, but also monitoring, logging and alerting practices.
In this talk I will present some of the best of those ideas, like testing microservices in isolation (including UI tests), posting deployment events to a chat room, adding team names to log lines, or team-driven monitoring of service metrics.
I will also talk about some ideas that failed for us, like building a comprehensive test suite for the overall product or a company-wide QA guild.
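As a sketch of what "testing microservices in isolation" can mean in practice: the neighbouring service is replaced by an in-process HTTP stub serving canned responses, so the service under test can be exercised without a running environment. The example below uses only the JDK's built-in com.sun.net.httpserver; real setups often use dedicated tools such as WireMock, and the endpoint and payload here are made up for illustration.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class IsolationTestDemo {

    // Start a stub server on a free port that answers the given path with a canned body.
    static HttpServer startStub(String path, String body) throws IOException {
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext(path, exchange -> {
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });
        stub.start();
        return stub;
    }

    // Trivial HTTP GET; a real test would drive the service under test instead.
    static String fetch(String url) throws IOException {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), StandardCharsets.UTF_8))) {
            return in.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        // The stub stands in for the neighbouring microservice.
        HttpServer stub = startStub("/products", "[{\"id\":1}]");
        int port = stub.getAddress().getPort();
        try {
            System.out.println(fetch("http://localhost:" + port + "/products"));
        } finally {
            stub.stop(0);
        }
    }
}
```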
How to Build a Test Library for a Microservices-Based Web Application with Geb & Spock
At REWE digital we are building & maintaining a microservice-based e-commerce web application.
Our service teams work quite autonomously & are responsible for their own services' quality.
They decide which measures are appropriate & efficient in order to ensure no bugs reach production.
Many have reasonable code coverage via unit tests, a good number of service tests (including UI tests) & a sufficient monitoring & alerting system.
However, several teams felt the need for more integrated testing of the whole system to prevent CSS clashes, errors due to interface changes, eventual consistency disasters & many, many unforeseen issues.
To support these teams, we decided to turn our old, retired comprehensive test suite into a test library, enabling teams to write their own system tests without the need to implement every stupid step in every team.
In this talk I'd like to present the lessons we learned & the design patterns we developed while implementing such a test library with Geb & Spock.
- Slides
- Video from Greach 2019
Trust Is Good, Control Is Harmful (Pecha Kucha)
Traditionally, software projects are often run with separate development and test teams, whereas in agile software development testers and developers are usually combined in one team.
Personally, the latter feels much better to me, but why is that?
In this talk I discuss various aspects that, in my view, make the use of a separate test team downright cumbersome and even counterproductive, and that make the agile approach so much more successful and sensible.
Meet your own Biases
You’ve certainly heard that word before: “bias”. Today, a lot of controversial topics surround that word and for a good reason. After all, bias is at the core of a lot of discrimination and prejudice issues in our world.
However, did you know there are many types of biases that influence our judgement every day and are not related to discrimination?
For instance, have you heard of Loss Aversion? It states that humans experience losing something much more intensely than acquiring it. This really affects our judgement, for instance when you are contemplating deleting an automated test!
Maybe the Gambler’s fallacy influences the way you handle flaky tests? Or perhaps the Spotlight Effect blocks you from driving changes in your organization?
In this workshop, we want you to experience some of these cognitive biases first-hand! After all, acknowledging that our behavior, as human beings, is impacted by these factors is the first step in learning how to improve our rational judgement. We’re also going to try to relate these behaviors with our professional lives. Maybe you can even come up with your own ideas on how cognitive biases hinder our abilities as testers and engineers.
Let’s learn together! So join us and, please, bring your cognitive biases with you!
- Experience for yourself some cognitive biases that affect our day-to-day rational judgement.
- Understand how cognitive biases are connected to some of our behaviors as professionals.
- Learn about materials you can follow up on if you’re interested in knowing more about cognitive biases.
Exploratory Testing Workshop
In this workshop I explain the basics of exploratory testing.
By taking this workshop, you will learn what exploratory testing is and how it might be useful to you.
In the exercises you will either explore your own product or a similar commonly known one in order to deepen your understanding of the principles of exploration.
Spock Testing Workshop
This workshop is about the Groovy-based testing framework Spock.
By taking this workshop, you will learn how to add the framework to an existing JVM project and its benefits for your testing.
Geb Testing Workshop
This workshop is about the Groovy and Selenium-based web testing framework Geb.
By taking this workshop, you will learn how to create readable, semantic and maintainable tests for an existing website or application.
Groovy Workshop for Java™ Developers
This workshop introduces the Groovy language as an alternative to Java™.