Talks
Level Up Your Testing Toolkit!
Have you ever heard about property-based testing?
Do you know about mutation testing?
Are you familiar with approval testing?
What is your opinion about fuzzing?
Many terms, techniques, and tools have come up since the invention of unit testing.
But who has time to look into all of them?
Thankfully, over the last few years, we were fortunate enough to have that time.
In this talk, we want to share with you the insights we gained.
Hence, we will walk you through each of these terms, explain their most important characteristics, and show in which cases they hold value.
This may not make you an expert, but it will give you enough of an impression to judge for yourself whether a topic is worth further investigation.
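As a first taste, here is a hand-rolled sketch of the property-based testing idea as a Spock specification (the property and the generator are our own illustration; dedicated frameworks like jqwik or QuickCheck add input shrinking and better reporting):

```groovy
import spock.lang.Specification

class ReverseSpec extends Specification {

    def "reversing a list twice yields the original list"() {
        expect:
        list.reverse().reverse() == list

        where:
        // assert the property over 100 random inputs
        // instead of a few hand-picked examples
        list << (1..100).collect { randomIntList() }
    }

    private static List<Integer> randomIntList() {
        def random = new Random()
        (0..<random.nextInt(21)).collect { random.nextInt() }
    }
}
```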
Key Learnings
- Learn about new and specialized testing terms, techniques, and tools.
- Be able to judge if they are applicable and valuable to your specific situation.
- Be aware of potential alternative solutions for classic testing.
Material
Slides
Example Code
How (Not) to Measure Quality
As software developers, my team and I repeatedly found ourselves fighting for more quality and less feature pressure.
In doing so, we often won concessions, but the time we were given was usually insufficient and the results correspondingly unsatisfactory.
If I am honest, we ourselves were not in a position to say exactly how bad the quality actually was.
Moreover, I have noticed time and again that developers, users and managers of software have fundamentally different priorities when it comes to quality.
Developers often think about internal aspects like code quality and maintainability, users care about external qualities like bugs and usability, and managers need predictability and efficiency in the development process.
Of course, there are various metrics that try to measure the quality (or rather qualities) of software, but these often cover only very small sub-aspects and come with more or less harmful side effects.
In my talk, I point out the side effects of quality measurements and show a method for finding metrics that work better.
I highlight the weaknesses of individual classical quality metrics and suggest better-suited alternatives.
In the end, this results in a network in which each metric is justified by a clear goal and a concrete question, and in which the metrics' weaknesses balance each other out.
Key Learnings
- Identify different approaches on how (not) to measure quality.
- Assess commonly used quality metrics against different purposes.
- Be aware of possible side effects of measurements.
- Understand how metrics can be combined to even out each other's weaknesses.
Material
Slides
Writing Tests Like Shakespeare
Automated tests—especially UI tests—often lack readability.
They either describe the performed actions in very fine-grained detail, or hide them behind an overly sophisticated abstraction that leaves the reader guessing or digging deep into the code base.
This becomes a serious problem when we try to understand the context of a test failure.
A NoSuchElementException only tells us which element was missing, not why we expected it to be there at the specific point in our test.
Another common issue in complex tests is code duplication.
Either we just copy and paste long sequences of dull commands, or we forget to use the helper functions and utilities we had hidden them in.
This makes maintenance a very frustrating experience.
Finally, such code is often not fit for being shared among teams working on the same product, or for being reused in different tests.
The Screenplay pattern offers a user-centric way of writing tests which abstracts fine-grained interactions into simple non-technical tasks and questions.
These make it easy to introduce just the right level of abstraction for the use case at hand.
The resulting test code is almost plain natural language with only a few extra “cody” characters, but I assertThat(averageReader.does(understandThis())).isTrue().
As every failure is happening in the context of performing a task or answering a question, understanding failures also automatically becomes a much easier endeavor.
Sharing Screenplay code between tests or even among teams is pretty easy as the tasks and questions are implemented as simple immutable objects.
This also makes it easy to implement Screenplays in any preferred language and framework.
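To give a first impression, here is a minimal sketch of the core Screenplay building blocks in Groovy (the names are illustrative, not the actual API of the Shakespeare Framework or any other implementation):

```groovy
// Tasks and questions are simple immutable objects;
// an actor performs tasks and answers questions.
interface Task {
    void performAs(Actor actor)
}

interface Question<T> {
    T answerAs(Actor actor)
}

class Actor {
    final String name

    Actor(String name) { this.name = name }

    Actor does(Task task) {
        task.performAs(this)
        this
    }

    def <T> T checks(Question<T> question) {
        question.answerAs(this)
    }
}

// A test then reads almost like natural language
// (loginWith and currentUserName are hypothetical factories):
//
//   def juliet = new Actor("Juliet")
//   juliet.does(loginWith("juliet", "s3cr3t"))
//   assert juliet.checks(currentUserName()) == "juliet"
```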
So if this sounds like a good idea to you, come to my talk and learn how to write tests like Shakespeare.
Key Learnings
- Learn what the Screenplay pattern is and how it can help you to write better tests.
- Get to know the key concepts of Screenplay to implement your own or use an existing framework supporting it.
- Discover how object-oriented design can help to make test code less cumbersome.
- Find out about a way to keep your tests concise and readable, while using all the Selenium tweaks and tricks to keep them fast and reliable.
- See how Screenplay allows you to write tests that span different media, like web, email, or APIs, with surprising ease.
Material
Slides,
Example Code,
Shakespeare Framework
Fantastic Biases & Where to Find Them in Software Development with João Proença
Why did all our test cases fail because of this simple bug?
Nobody tried that out before?
How did five people agree to implement this terrible feature?
Why are our estimates always so far off?
There are many possible answers to those questions and none of them will be the whole truth.
However, certain common cognitive biases might play a main role in all the events leading to those questions.
We all have them.
They help us to think faster, but they also make us less rational than we think we are.
They hinder our best judgement!
In this talk I'll demonstrate some of the most severe biases, explain their background, point out how they typically influence our professional decisions, and suggest some strategies to mitigate their effect.
Being able to recognize and overcome biases in ourselves and others is a long, challenging road for anyone. You won't complete that journey with this talk alone, but you'll certainly take your first step!
Key Learnings
- Understand what cognitive biases are.
- Acknowledge that you are biased – like everyone else is as well.
- Get to know some of the most severe biases.
- Learn about some mitigation strategies.
Material
Slides
From Monolith Testing to Microservice Quality Assurance
When REWE digital started to sell groceries online, we launched with a massive monolithic piece of software developed in only six months by a software agency.
Right after launch we started to build up our own software teams to take over further development, but we had a hard time developing new features without breaking existing functionality…
…today the monolith is still in place, but most of its functionality has been replaced by microservices which are communicating via asynchronous messaging and deliver their own frontends.
In this session we will talk about challenges we faced over the past three years:
- optimizing the monolith's architecture for faster feature development
- breaking it apart into microservices
- adjusting the QA strategy from a single deployment release process to 40 teams deploying their services whenever they want to
- developing new types of testing for microservices and micro-frontends
- solving problems with testing asynchronously-communicating microservices
- organizing QA in a rapidly growing company
Material
new English slides,
old English slides,
old German slides,
Video from REWE digital Meetup Ilmenau (German)
Team-Driven Microservice Quality Assurance
While the microservice architectural style has a lot of benefits, it makes certain QA practices impractical: there is no big release candidate that can be tested before being put into production, no single log file to look into for root cause analysis, and no single team to assign found bugs to. Instead, there are deployments happening during test runs, as many log files as there are microservices, and many teams to mess with the product.
At REWE digital we took a strictly team-driven QA approach.
Our teams tried a lot of good and bad ideas to QA our microservice ecosystem.
These involved automated testing, but also monitoring, logging, and alerting practices.
In this talk I will present some of the best of those ideas, like testing microservices in isolation (including UI tests), posting deployment events to a chat room, adding team names to log lines, and team-driven monitoring of service metrics.
I will also talk about some ideas that failed for us, like building a comprehensive test suite for the overall product or a company-wide QA guild.
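As a small taste of the isolation idea, here is a sketch of a Spock test that replaces a neighbouring microservice with a WireMock stub (the service name, endpoint, and port are made up for the example):

```groovy
import com.github.tomakehurst.wiremock.WireMockServer
import spock.lang.Specification

import static com.github.tomakehurst.wiremock.client.WireMock.*

class PriceServiceIsolationSpec extends Specification {

    // WireMock stands in for the neighbouring price service,
    // so the service under test can be exercised in isolation
    WireMockServer priceService = new WireMockServer(8089)

    def setup() {
        priceService.start()
        priceService.stubFor(get(urlEqualTo("/prices/42"))
                .willReturn(okJson('{"price": "9.99"}')))
    }

    def cleanup() {
        priceService.stop()
    }

    def "the stub responds like the real price service"() {
        expect:
        new URL("http://localhost:8089/prices/42").text.contains("9.99")
        // a real test would point the service under test at
        // http://localhost:8089 and drive its UI or API instead
    }
}
```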
Material
new slides,
old slides
How to Build a Test Library for a Microservices-Based Web Application with Geb & Spock
At REWE digital we are building and maintaining a microservice-based e-commerce web application.
Our service teams work quite autonomously and are responsible for their own services' quality.
They decide which measures are appropriate and efficient to ensure no bugs make it to production.
Many have reasonable code coverage via unit tests, a good number of service tests (including UI tests), and sufficient monitoring and alerting.
However, several teams felt the need for more integrated testing of the whole system to prevent CSS clashes, errors due to interface changes, eventual-consistency disasters, and many, many unforeseen issues.
To support these teams, we decided to turn our old, retired, comprehensive test suite into a test library that enables teams to write their own system tests without having to implement every stupid step in every team.
In this talk I'd like to present the lessons we learned and the design patterns we developed while implementing such a test library with Geb and Spock.
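To give an idea of the building blocks such a library can provide, here is a sketch of a shared Geb page object and a Spock spec built on top of it (page and content names are made up; the real library is more elaborate):

```groovy
import geb.Page
import geb.spock.GebSpec

// a page object provided by the shared test library,
// so teams don't have to model the search page themselves
class SearchPage extends Page {
    static url = "search"
    static at = { title.contains("Search") }
    static content = {
        searchField { $("input", name: "q") }
        searchButton { $("button", type: "submit") }
        results { $("li.result") }
    }
}

// a team's own system test using the shared page object
class SearchSpec extends GebSpec {
    def "searching for milk lists matching products"() {
        given:
        to SearchPage

        when:
        searchField.value("milk")
        searchButton.click()

        then:
        results.size() > 0
    }
}
```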
Material
Slides, Video from Greach 2019
Trust Is Good Better, Control Is Better Harmful (Pecha Kucha)
Traditionally, software projects are often run with separate development and test teams, while in agile software development, testers and developers usually work together in one team.
The latter feels much better to me personally, but why is that?
In this talk, I discuss several aspects that, from my point of view, make a separate test team downright cumbersome and even counterproductive, and that make the agile approach so much more successful and sensible.
Material
Slides (German)
Workshops
How to Untangle Your Spaghetti Test Code with Christian Baumann
In many teams we worked in, test code was treated much less carefully than production code.
It was expected to just work. Mindless copy and paste of setup code from one test case to another was never seen as problematic, duplication was widely accepted, and things were named randomly.
This inevitably leads to problems: gaps in assertions become hard to spot; consolidating long-running test suites becomes a cumbersome task; and magic numbers need to be changed all across the suite when they become outdated.
All of this affects the overall maintainability of our code base.
Over the years we identified several good practices to prevent these problems and keep test code maintainable.
Some borrowed from general good code quality standards, some specific for test code.
In this workshop, we are going to briefly discuss the properties of good test code.
Then we’ll present our good practices and let you apply these to a prepared test suite.
Lastly, you will discuss action items for your day job.
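As a small taste, here is one of those practices: replacing copy-pasted setup and magic numbers with a named constant and a test data builder (the Customer domain is made up for illustration):

```groovy
class Customer {
    String name
    int age
}

class CustomerBuilder {
    // sensible defaults, so each test states only what it cares about
    private String name = "any name"
    private int age = 42

    static CustomerBuilder aCustomer() { new CustomerBuilder() }

    CustomerBuilder named(String name) { this.name = name; this }
    CustomerBuilder aged(int age) { this.age = age; this }
    Customer build() { new Customer(name: name, age: age) }
}

// instead of copy-pasted setup with a magic number ...
//   def customer = new Customer(name: "Jane Doe", age: 18)
// ... the test states its intent, with LEGAL_AGE defined once:
//   def adult = CustomerBuilder.aCustomer().aged(LEGAL_AGE).build()
```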
Key Learnings
- Learn code quality criteria that apply to test code.
- Recognize anti-patterns in your test code.
- Apply some simple good practices that help to keep your test code maintainable.
- Take away concrete action items for your day job.
Material
Code,
Slides,
Cheat Sheet
Let's Get Into Coding with Stefan Scheidt
Coding is often seen as a kind of superpower.
Only the “chosen ones” are able to practice this art.
That’s wrong! Coding can be learned by anyone! In fact, a lot of developers learned coding on their own.
Most of them started with just some initial knowledge and a motivation to make the machine do something they wanted.
In this workshop we aim to give you that experience.
We will provide you with that initial knowledge and setup to get you going and let your own motivation do the rest, step by step:
We will provide you with a prepared project, give an introduction to its structure and make sure that everyone is able to work on it.
Next you will alter existing functionality to get into the programming language.
In the end, you will get the chance to build a completely new feature into the script.
These exercises aim to get you hooked on coding.
To keep you going after the workshop, we also invite you to stay connected as a community of learners.
For that, we will set up a Slack channel you can join to get and (eventually) give support.
Key Learnings
- Experience the power of coding and how it can help you with your daily work.
- Learn the basics of a scripting language to get your coding journey started.
- Create your very first self-built tool custom-fit to your recurring tasks.
- Join a community of fellow learners to keep you going.
Meet your own Biases with João Proença
You’ve certainly heard that word before: “bias”.
Today, a lot of controversial topics surround that word and for a good reason.
After all, bias is at the core of a lot of discrimination and prejudice issues in our world.
However, did you know there are many types of biases that influence our judgement every day and are not related to discrimination?
For instance, have you heard of Loss Aversion? It states that humans experience losing something much more intensely than acquiring it.
It really affects your judgement when, for instance, you are contemplating deleting an automated test!
Maybe the Gambler’s fallacy influences the way you handle flaky tests? Or perhaps the Spotlight Effect blocks you from driving changes in your organization?
In this workshop, we want you to experience some of these cognitive biases first-hand!
After all, acknowledging that our behavior, as human beings, is impacted by these factors is the first step in learning how to improve our rational judgement.
We’re also going to try to relate these behaviors with our professional lives.
Maybe you can even come up with your own ideas on how cognitive biases hinder our abilities as testers and engineers.
Let’s learn together! So join us and, please, bring your cognitive biases with you!
Key Learnings
- Experience for yourself some cognitive biases that affect our day-to-day rational judgement.
- Understand how cognitive biases are connected to some of our behaviors as professionals.
- Learn about materials you can follow up on if you’re interested in knowing more about cognitive biases.
Exploratory Testing Workshop
In this workshop I explain the basics of exploratory testing.
By taking this workshop, you will learn what exploratory testing is and how it might be useful to you.
In the exercises you will either explore your own product or a similar, commonly known one in order to deepen your understanding of the principles of exploration.
Material
Slides
Spock Testing Workshop
This workshop is about the Groovy-based testing framework Spock.
By taking this workshop, you will learn how to add the framework to an existing JVM project and what benefits it brings to your testing.
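For a first impression, this is roughly what getting started looks like in a Gradle project (the version shown is only an example; pick the variant matching your Groovy setup):

```groovy
// build.gradle
dependencies {
    testImplementation 'org.spockframework:spock-core:2.3-groovy-4.0'
}

test {
    useJUnitPlatform() // Spock 2 runs on the JUnit platform
}
```

A first specification then looks like this:

```groovy
import spock.lang.Specification

class FirstSpec extends Specification {
    def "the maximum of two numbers"() {
        expect:
        Math.max(1, 2) == 2
    }
}
```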
Material
Code
Geb Testing Workshop
This workshop is about the Groovy and Selenium-based web testing framework Geb.
By taking this workshop, you will learn how to create readable, semantic, and maintainable tests for an existing website or application.
Material
Code
Groovy Workshop for Java™ Developers
This workshop introduces the Groovy language as an alternative to Java™.
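As a small taste (the example itself is just an illustration), the same logic written Java-style and in idiomatic Groovy:

```groovy
def names = ["Ada", "Grace", "Margaret"]

// Java-style:
List<String> shortNamesJavaStyle = new ArrayList<>()
for (String name : names) {
    if (name.length() <= 4) {
        shortNamesJavaStyle.add(name.toUpperCase())
    }
}

// Idiomatic Groovy, same result:
def shortNames = names.findAll { it.size() <= 4 }*.toUpperCase()

assert shortNames == ["ADA"]
assert shortNames == shortNamesJavaStyle
```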
Material
Code