Michael Kutz

Biography

I've been working in professional software development for more than 12 years now. I love to write working software, and I hate fixing bugs. Hence, I developed a strong focus on test automation, continuous delivery/deployment and agile principles.

Later I came to the insight that the most sustainable way of fixing code is to optimize those who code. For that reason I dug deeper into psychological safety, cognitive biases and ways to spread knowledge within software-producing organizations.

Since 2014 I have been working at REWE digital as a software engineer and internal coach for quality assurance and testing. As such, my main objective is to support our development teams in QA and test automation, empowering them to write awesome bug-free software fast.

Tour

These are my talk appointments. Videos and slides will be added to past events when available.

2024

2024-11-19…22 Agile Testing Days
Dorint Sanssouci Potsdam
2024-11-06…08 Øredev 2024
Untangle Your Spaghetti Test Code Workshop MalmöMässan
2024-08-22…25 SoCraTes 2024
Hotel Park Soltau
2024-06-22 DigitalXchange
Wie man Qualität (nicht) misst TH Köln – Campus Gummersbach
2024-05-07 Agile Testing After Work
How (Not) to Measure Quality REWE digital Köln Carlswerk Office
2024-04-09…11 JavaLand
How (Not) to Measure Quality Nürburgring 1927

2023

2023-11-13…16 Agile Testing Days
How to Untangle Your Spaghetti Test Code with Christian Baumann Dorint Sanssouci Potsdam
2023-09-26…27 TACON 2023
How (Not) to Measure Quality Design Offices Leipzig Post
2023-06-13…15 Agile Testing Days Open Air
Let’s Get Into Coding with Stefan Scheidt Black Foot Beach Cologne
2023-03-28…31 Booster Conference
Meet Your Own Biases with João Proença Radisson Blu Royal Hotel, Bergen
2023-03-23 TestBash Spring
How (Not) to Measure Quality online
2023-02-06…09 OOP
How (Not) to Measure Quality online

2022

2022-11-21…24 Agile Testing Days
How (Not) to Measure Quality Dorint Sanssouci Potsdam
2022-11-08…09 Softwareforen Leipzig COMMUNITY DAYS Softwaretest & Qualitätssicherung
Wie man Qualität (nicht) misst Alte Essig Manufactur
2022-10-26…27 TestCon Europe
Fantastic Biases & Where to Find Them in Software Development with João Proença online
2022-10-17…18 AutomationSTAR
Tests schreiben wie Shakespeare – Komplexe Tests organisieren mit dem Screenplay Pattern Hilton München Park
2022-05-17…19 Agile Testing Days Open Air
Meet Your Own Biases with João Proença and Writing Tests Like Shakespeare Black Foot Beach Cologne

2021

2021-11-15…19 Agile Testing Days
Fantastic Biases & Where to Find Them in Software Development with João Proença Dorint Sanssouci Potsdam
2021-10-27…28 Agile Tour Vilnius
Meet Your Own Biases with João Proença online
2021-09-29 Ministry of Testing 99 Minute Workshop
Meet Your Own Biases with João Proença online @ Ministry of Testing

2020

2020-11-08…13 Agile Testing Days
Meet Your Own Biases with João Proença – online due to COVID-19 (originally planned at Dorint Sanssouci Potsdam)
2020-09-29…30 TACON
From Monolith Testing to Microservice Quality Assurance online
2020-08-26…28 Agile Beyond IT
The BRAND New Work Experience – Agile Employer Branding @ REWE digital – online (originally planned at Mercure Hotel MOA Berlin)
2020-07-21…23 SAEC-Days
From Monolith Testing to Microservice Quality Assurance (extended), Exploratory Testing Workshop & Intensive Coaching Session with Georg Haupt – online (originally planned at NH München Ost Konferenzcenter)
2020-03-26…28 Greach Conference (COVID-19)
Spock Testing Workshop with Leonard Brünings & Geb Testing Workshop with Marcin Erdmann Google for Startups Campus, Madrid
2020-03-17…19 JavaLand (COVID-19)
Team-Driven Microservice Quality Assurance Phantasialand Brühl

2019

2019-11-06 Agile Testing Days
Team-Driven Microservice Quality Assurance Dorint Sanssouci Potsdam
2019-07-02 Agile Testing
Team-Driven Microservice Quality Assurance NH München Ost Konferenzcenter
2019-06-06…07 German Testing Day
Vertrauen ist gut besser, Kontrolle ist besser schädlich & Team-Driven Microservice Quality Assurance Kap Europa, Frankfurt am Main
2019-05-28 Gr8Conf
How to Build a Test Library for a Microservices-Based Web Application with Geb & Spock IT University of Copenhagen
2019-05-22 REWE digital Köln Meetup
Vertrauen ist gut besser, Kontrolle ist besser schädlich & Team-getriebene Microservice-Qualitätssicherung REWE Digital GmbH, Cologne
2019-05-16 Hands-On Conference: Agiles Arbeiten trifft auf Softwareentwicklung
Team-getriebene Microservice-Qualitätssicherung tarent solutions GmbH, Rochusstraße, Bonn
2019-03-30 Greach Conference
How to Build a Test Library for a Microservices-Based Web Application with Geb & Spock Video (full talk) Teatro Luchana, Madrid
2019-01-15 Agile QA Cologne Meetup
Team-Driven Microservice Quality Assurance REWE Digital GmbH, Cologne

Talks

Level Up Your Testing Toolkit!

Have you ever heard about property-based testing? Do you know about mutation testing? Are you familiar with approval testing? What is your opinion about fuzzing?

Many terms, techniques, and tools have come up since the invention of unit testing. But who has time to look into all of them?

Thankfully, over the last few years, we were fortunate enough to have that time. In this talk, we want to share with you the insights we gained. We will lead you through each of the terms, explain their most important characteristics, and show in which cases they hold value.

This may not make you an expert, but it will give you enough of an impression to judge for yourself whether a topic is worth further investigation.

Key Learnings

  • Learn about new and specialized testing terms, techniques, and tools.
  • Be able to judge if they are applicable and valuable to your specific situation.
  • Be aware of potential alternative solutions for classic testing.

Material

Slides Example Code

How (Not) to Measure Quality

As software developers, my team and I were repeatedly in the situation of fighting for more quality and less feature pressure. We often achieved concessions, but the time we were given was frequently insufficient and the result correspondingly unsatisfactory. To be honest, we ourselves were not in a position to say exactly how bad the quality actually was.

Moreover, I have noticed time and again that developers, users and managers of software have fundamentally different priorities when it comes to quality. Developers often think about internal aspects like code quality and maintainability, users care about external aspects like bugs and usability, and managers need predictability and efficiency in the development process.

Of course, there are various metrics that try to measure the quality (or qualities) of software, but these often only cover very small sub-aspects and have more or less harmful side effects.

In my talk I would like to point out the side effects of quality measurements and show a method for finding metrics that work better. I will point out the weaknesses of individual classical quality metrics and suggest better-suited alternatives. In the end, this results in a network in which each metric is justified by a clear goal and a concrete question, and in which the metrics' weaknesses balance each other out.

Key Learnings

  • Identify different approaches on how (not) to measure quality.
  • Assess commonly used quality metrics against different purposes.
  • Be aware of possible side effects of measurements.
  • Understand how metrics can be combined to even out each other's weaknesses.

Material

Slides

Writing Tests Like Shakespeare

Automated tests—especially UI tests—often lack readability. They either have a very fine-grained description of the performed actions or an abstraction so sophisticated that it leaves the reader guessing or digging deep into the code base. This becomes a serious problem when we try to understand the context of a test failure: a NoSuchElementException only tells us which element was missing, not why we expected it to be there at that specific point in our test.

Another common issue in complex tests is code duplication. Either we just copy and paste long sequences of dull commands, or we forget to use the helper functions and utils we created to hide them. This makes maintenance a very frustrating experience. Finally, such code is often not fit for being shared among teams working on the same product, or for being reused in different tests.

The Screenplay pattern offers a user-centric way of writing tests which abstracts fine-grained interactions into simple non-technical tasks and questions. These make it easy to introduce just the right level of abstraction for the use case at hand. The resulting test code is almost plain natural language with only a few extra “cody” characters, but I assertThat(averageReader.does(understandThis())).isTrue(). As every failure is happening in the context of performing a task or answering a question, understanding failures also automatically becomes a much easier endeavor. Sharing Screenplay code between tests or even among teams is pretty easy as the tasks and questions are implemented as simple immutable objects. This also makes it easy to implement Screenplays in any preferred language and framework.
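
To give a rough impression of what this looks like, here is a minimal Screenplay-style sketch in Groovy. All names below (Actor, AddToCart, TheCartItems, FakeBrowser) are invented for illustration and are not the actual API of the Shakespeare Framework or any other Screenplay library:

    // Tasks and questions are small, immutable objects an actor performs or answers.
    interface Task { void performAs(Actor actor) }
    interface Question { def answeredBy(Actor actor) }

    class Actor {
        final String name
        final FakeBrowser browser = new FakeBrowser()    // the actor's "ability"
        Actor(String name) { this.name = name }
        Actor attemptsTo(Task... tasks) { tasks.each { it.performAs(this) }; this }
        def asks(Question question) { question.answeredBy(this) }
    }

    class FakeBrowser {                                  // stand-in for a real web driver
        final List<String> cart = []
        void clickAddToCart(String item) { cart << item }
    }

    // A task hides fine-grained UI interactions behind a business-level name.
    class AddToCart implements Task {
        final String item
        AddToCart(String item) { this.item = item }
        static AddToCart theItem(String item) { new AddToCart(item) }
        void performAs(Actor actor) { actor.browser.clickAddToCart(item) }
    }

    // A question is something the actor can observe; assertions are made on its answer.
    class TheCartItems implements Question {
        static TheCartItems displayed() { new TheCartItems() }
        def answeredBy(Actor actor) { actor.browser.cart }
    }

    // The resulting test reads almost like natural language.
    def michael = new Actor("Michael")
    michael.attemptsTo(AddToCart.theItem("milk"))
    assert michael.asks(TheCartItems.displayed()) == ["milk"]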

So if this sounds like a good idea to you, come to my talk and learn how to write tests like Shakespeare.

Key Learnings

  • Learn what the Screenplay pattern is and how it can help you to write better tests.
  • Get to know the key concepts of Screenplay to implement your own or use an existing framework supporting it.
  • Discover how object-oriented design can help to make test code less cumbersome.
  • Find out about a way to keep your tests concise and readable, while using all the Selenium tweaks and tricks to keep them fast and reliable.
  • See how Screenplays allow you to write tests that use different media – like web, email or APIs – with surprising ease.

Material

Slides, Example Code, Shakespeare Framework

Fantastic Biases & Where to Find Them in Software Development with João Proença

Why did all our test cases fail because of this simple bug? Nobody tried that out before? How did five people agree to implement this terrible feature? Why are our estimates always so far off?

There are many possible answers to those questions, and none of them will be the whole truth. However, certain common cognitive biases might play a major role in all the events leading up to those questions. We all have them.

They help us to think faster, but they also make us less rational than we think we are. They hinder our best judgement! In this talk I'll demonstrate some of the most severe biases, explain their background, point out how they typically influence our professional decisions, and suggest some strategies to mitigate their effect.

Being able to recognize and overcome biases in us and others is a long, challenging road for anyone – you won’t be able to do that journey with this talk alone, but you’ll certainly take your first step!

Key Learnings

  • Understand what cognitive biases are.
  • Acknowledge that you are biased – like everyone else is as well.
  • Get to know some of the most severe biases.
  • Learn about some mitigation strategies.

Material

Slides

From Monolith Testing to Microservice Quality Assurance

When REWE digital started to sell groceries online, we launched with a massive monolithic piece of software developed in only six months by a software agency.

Right after launch we started to build up our own software teams to take over further development, but we had a hard time developing new features without breaking existing functionality…

…today the monolith is still in place, but most of its functionality has been replaced by microservices which are communicating via asynchronous messaging and deliver their own frontends.

In this session we will talk about challenges we faced over the past three years:

  • optimizing the monolith's architecture for faster feature development
  • breaking it apart into microservices
  • adjusting the QA strategy from a single deployment release process to 40 teams deploying their services whenever they want to
  • developing new types of testing for microservices and micro-frontends
  • solving problems with testing asynchronously-communicating microservices
  • organizing QA in a rapidly growing company

Material

new English slides, old English slides, old German slides, Video from REWE digital Meetup Ilmenau (German)

Team-Driven Microservice Quality Assurance

While the microservice architectural style has a lot of benefits, it makes certain QA practices impractical: there is no big release candidate that can be tested before being put into production, no single log file to look into for root cause analysis, and no single team to assign found bugs to. Instead, there are deployments happening during test runs, as many log files as there are microservices, and many teams to mess with the product.

At REWE digital we took a strictly team-driven QA approach. Our teams tried a lot of good and bad ideas to QA our microservice ecosystem. This involves automated testing, but also monitoring, logging and alerting practices.

In this talk I will present some of the best of those ideas, like testing microservices in isolation including UI tests, posting deployment events to a chat room, adding team names to log lines, and team-driven monitoring of service metrics.

Also, I will talk about some ideas that failed for us, like building a comprehensive test suite for the overall product or a company-wide QA guild.

Material

new slides, old slides

How to Build a Test Library for a Microservices-Based Web Application with Geb & Spock

At REWE digital we are building & maintaining a microservice-based e-commerce web application. Our service teams work quite autonomously & are responsible for their own services' quality. They decide which measures are appropriate & efficient in order to ensure no bugs make it into production. Many have reasonable code coverage via unit tests, a good number of service tests – including UI tests – & a sufficient monitoring & alerting system.

However, several teams felt the need for a more integrated testing of the whole system to prevent CSS clashes, errors due to interface changes or eventual inconsistency disasters & many many unforeseen issues.

To support these teams, we decided to turn our old, retired comprehensive test suite into a test library to enable teams to write their own system tests without the need to implement every stupid step in every team.

In this talk I'd like to present the lessons we learned & the design patterns we developed while implementing such a test library with Geb & Spock.
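
As a small illustration of the kind of building block such a library can provide, here is a sketch of a shared Geb page object & a Spock/Geb specification reusing it. The page, its selectors & the scenario are invented for this example & are not taken from our actual library:

    import geb.Page
    import geb.spock.GebSpec

    // A shared page object: teams reuse it instead of re-implementing selectors
    // & low-level steps in every test suite.
    class SearchPage extends Page {
        static url = "search"                        // resolved against the configured baseUrl
        static at = { title.contains("Search") }
        static content = {
            searchField { $("input", name: "q") }
            submitButton { $("button", type: "submit") }
        }

        void searchFor(String term) {
            searchField.value(term)
            submitButton.click()
        }
    }

    // A team's own system test built on top of the shared page object.
    class SearchSpec extends GebSpec {
        def "searching for a term leads to a results page"() {
            when:
            to SearchPage
            searchFor "milk"

            then:
            title.contains("Results")
        }
    }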

Material

Slides, Video from Greach 2019

Vertrauen ist gut besser, Kontrolle ist besser schädlich (Pecha Kucha)

Traditionally, software projects are often run with separate development and test teams, while in agile software development testers and developers are usually brought together in one team. The latter feels much better to me personally, but why is that?

In this talk I go into various aspects which, from my point of view, make the use of a separate test team downright cumbersome and even counterproductive, and into what makes the agile approach so much more successful and sensible.

Material

Slides

Workshops

How to Untangle Your Spaghetti Test Code with Christian Baumann

In many teams we worked in, test code was treated much less carefully than production code. It was expected to just work. Mindless copy and paste of setup code from one test case to another was never seen as problematic, duplication was widely accepted, and things were named randomly. This always leads to problems: gaps in assertions become pretty non-obvious; consolidating long-running test suites becomes a cumbersome task; magic numbers need to be changed all across the suite when they become outdated.

All of this affects the overall maintainability of our code base. Over the years we identified several good practices to prevent these problems and keep test code maintainable. Some are borrowed from general good code quality standards, some are specific to test code.
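
One small, made-up example of such a practice: replacing copy-pasted setup code and magic numbers with a named test-data builder (sketched in Groovy; Order and its values are invented for illustration).

    // Before: every test repeats `new Order(customerId: 4711, items: 3, total: 59.99)`,
    // and nobody remembers what is special about those numbers.
    class Order {
        long customerId
        int items
        BigDecimal total
    }

    class TestOrders {
        static final long ANY_CUSTOMER = 4711L       // one place to change outdated values

        static Order anOrderEligibleForDiscount() {
            new Order(customerId: ANY_CUSTOMER, items: 3, total: 59.99)
        }
    }

    // After: a test states its intent instead of its setup details.
    def order = TestOrders.anOrderEligibleForDiscount()
    assert order.items == 3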

In this workshop, we are going to briefly discuss the properties of good test code. Then we'll present our good practices and let you apply them to a prepared test suite. Lastly, you will discuss action items for your day job.

Key Learnings

  • Learn code quality criteria that apply to test code.
  • Recognize anti-patterns in your test code.
  • Apply some simple good practices that help to keep your test code maintainable.
  • Take away concrete action items for your day job.

Material

Code, Slides, Cheat Sheet

Let's Get Into Coding with Stefan Scheidt

Coding is often seen as a kind of superpower. Only the “chosen ones” are able to practice this art. That’s wrong! Coding can be learned by anyone! In fact, a lot of developers learned coding on their own. Most of them started with just some initial knowledge and a motivation to make the machine do something they wanted.

In this workshop we aim to give you that experience. We will provide you with that initial knowledge and setup to get you going and let your own motivation do the rest, step by step:

We will provide you with a prepared project, give an introduction to its structure and make sure that everyone is able to work on it. Next you will alter existing functionality to get into the programming language. In the end, you will get the chance to build a completely new feature into the script.

These exercises aim to get you hooked on coding. To keep you going after the workshop, we also invite you to stay connected as a community of learners. For that we will set up a Slack channel you can join for getting and (eventually) giving support.

Key Learnings

  • Experience the power of coding and how it can help you with your daily work.
  • Learn the basics of a scripting language to get your coding journey started.
  • Create your very first self-built tool custom-fit to your recurring tasks.
  • Join a community of fellow learners to keep you going.

Meet your own Biases with João Proença

You’ve certainly heard that word before: “bias”. Today, a lot of controversial topics surround that word and for a good reason. After all, bias is at the core of a lot of discrimination and prejudice issues in our world.

However, did you know there are many types of biases that influence our judgement every day and are not related to discrimination?

For instance, have you heard of Loss Aversion? It states that humans experience losing something much more intensely than acquiring it. It really affects our judgement, for instance when you are contemplating the idea of deleting an automated test!

Maybe the Gambler’s fallacy influences the way you handle flaky tests? Or perhaps the Spotlight Effect blocks you from driving changes in your organization?

In this workshop, we want you to experience some of these cognitive biases first-hand! After all, acknowledging that our behavior, as human beings, is impacted by these factors is the first step in learning how to improve our rational judgement. We’re also going to try to relate these behaviors with our professional lives. Maybe you can even come up with your own ideas on how cognitive biases hinder our abilities as testers and engineers.

Let’s learn together! So join us and, please, bring your cognitive biases with you!

Key Learnings

  • Experience for yourself some cognitive biases that affect our day-to-day rational judgement.
  • Understand how cognitive biases are connected to some of our behaviors as professionals.
  • Learn about materials you can follow up on if you’re interested in knowing more about cognitive biases.

Exploratory Testing Workshop

In this workshop I explain the basics of exploratory testing.

By taking this workshop, you will learn what exploratory testing is and how it might be useful to you. In the exercises you will explore either your own product or a similar, commonly known one in order to deepen your understanding of the principles of exploration.

Material

Slides

Spock Testing Workshop

This workshop is about the Groovy-based testing framework Spock.

By taking this workshop, you will learn how to add the framework to an existing JVM project and what benefits it brings to your testing.
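
For a first impression, this is roughly what a Spock specification looks like; the Cart class is a made-up example, and the framework itself would typically be added as a test dependency (for example org.spockframework:spock-core in a Gradle build):

    import spock.lang.Specification

    // A made-up class under test.
    class Cart {
        final List<String> items = []
        void add(String item) { items << item }
    }

    class CartSpec extends Specification {

        def "adding an item puts it into the cart"() {
            given: "an empty cart"
            def cart = new Cart()

            when: "one item is added"
            cart.add("milk")

            then: "the cart contains exactly that item"
            cart.items == ["milk"]
        }
    }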

Material

Code

Geb Testing Workshop

This workshop is about the Groovy and Selenium-based web testing framework Geb.

By taking this workshop, you will learn how to create readable, semantic and maintainable tests for an existing website or application.

Material

Code

Groovy Workshop for Java™ Developers

This workshop introduces the Groovy language as an alternative to Java™.
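
As a taste of what the workshop covers, here is the same logic written once in plain Java style and once in idiomatic Groovy (Person and the sample data are made up for illustration):

    class Person {
        String name
        int age
    }

    def people = [new Person(name: "Ada", age: 36), new Person(name: "Tim", age: 7)]

    // The Java way, line for line:
    List<String> adultsJavaStyle = new ArrayList<String>()
    for (int i = 0; i < people.size(); i++) {
        Person p = people.get(i)
        if (p.getAge() >= 18) {
            adultsJavaStyle.add(p.getName())
        }
    }

    // The Groovy way: a closure and the spread operator.
    def adultsGroovyStyle = people.findAll { it.age >= 18 }*.name

    assert adultsJavaStyle == adultsGroovyStyle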

Material

Code

Impressum

Technically Responsible (§ 5 TMG)

Address
Michael Kutz
Wellenstraße 53
53721 Siegburg
Telephone
02241/2409657
Email
mail@michael-kutz.de

Responsible for Content (§ 55 Abs. 2 RStV)

Address
Michael Kutz
Wellenstraße 53
53721 Siegburg