Small Talk with Gojko Adzic (Transcript)
This blog post is the transcription of the chat we had with Gojko Adzic about his Specification By Example workshop on 12 December 2022.
The conversation has been slightly edited to better fit the written format. Enjoy!
Avanscoperta: What is specification by example?
Gojko: Specification by example is a collaborative approach to requirements and tests. It's by far the fastest way I know to get people from different roles - testers, developers, analysts, and product owners - to agree on what they want to build and how they want to test it.
Specification by example works really well in short iterative delivery cycles. And it's probably one of the few things that does work well for requirements and testing in short iterative cycles because it is fast. Usually when people do a two-week development cycle, they don't have a lot of time to do proper analysis, to do all the stuff they need to do individually to figure out what they want to build, to transfer that knowledge to people who need to build it, then to figure out how they want to test it.
Specification by example makes that process work really well. As a technique, it’s usually done in a kind of Scrum or Kanban-style process.
It's a cornerstone of basic iterative software delivery quality, and it's also used within the context of behavior-driven development quite a lot. People come to Specification by example from that perspective as well.
And it's essentially a way for different roles to define and do collaborative analysis together that results in some very concrete examples of intended system usage and helps then create both a shared understanding of what needs to be built as well as a really good starting point for the definition of the acceptance criteria and test cases, and at a later stage documentation for improving the system.
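To make "concrete examples of intended system usage" tangible, here is a rough sketch (the free-shipping rule, names and numbers are all invented for illustration) of how examples agreed in such a session can later become automated acceptance checks, one assertion per example:

```python
def shipping_cost(order_total: float) -> float:
    """Hypothetical rule agreed in a workshop: orders of 100.00
    or more ship free; everything else costs a flat 5.00."""
    return 0.00 if order_total >= 100.00 else 5.00

# Each row is one concrete example from the workshop whiteboard,
# including the boundary case at exactly 100.00.
examples = [
    (20.00, 5.00),   # typical small order
    (99.99, 5.00),   # just below the boundary
    (100.00, 0.00),  # exactly on the boundary
    (250.00, 0.00),  # well above the boundary
]
for order_total, expected_cost in examples:
    assert shipping_cost(order_total) == expected_cost
```

The examples double as shared understanding during the workshop and as acceptance criteria afterwards.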
Avanscoperta: We have a comment by Chris Young - thanks for that: “Specification by example works well in DevOps too, where you need to do things fast, but often the requirements are a quick conversation.”
Gojko: And generally it works well anytime you have multiple roles where people are used to different sources of truth or different types of information, and you need to communicate across roles quickly - that's when examples tend to be a really good way of doing it.
In the social sciences, this is called a boundary object - something you can put between two different people that they can both use as a way of bouncing ideas off each other and clarifying things.
Generally, we use examples in everyday communication anyway to clarify things. If you Google for the phrase “for example”, you're going to find hundreds of millions of results, and specification by example just takes that idea and uses it to build a slightly more formal way of getting an agreement or flushing out disagreements very quickly.
DevOps is a nice example of where you have multiple different roles, typically somebody doing operations work and development work together, but they have different types of stakeholders that are developers or business stakeholders or testers, and figuring out exactly what needs to happen there - this is usually a communication challenge that needs to cross different roles. Examples are a good way of doing that.
Avanscoperta: How does specification by example happen in practice? What tools do you use and how does it actually happen?
Gojko: There are several aspects to specification by example. One key aspect is to do analysis collaboratively, and there are lots of different ways of doing that depending on the size of the group that needs to be involved and whether they are co-located or remote and things like that.
Let’s talk about the group size:
1) if you have a very small group, I'd say three or four people, you don't have to have a lot of formality in how these conversations happen, as long as people know how to use examples and bounce them off each other.
Usually, this pattern is called the Three Amigos.
It used to be one developer, one tester, and one analyst when we started because analysts were still a thing. And today most organizations have renamed those analysts to product owners and they have not given these people the skills to do product ownership, so they still do analysis.
So you have these three different roles usually, and they use examples. Somebody who represents the business, the analyst or the product owner, comes with some initial examples. Testers and developers try to complain about that and try to think about additional things that would be important from a development or a testing perspective, and they just try to agree on these examples.
2) When you have a slightly larger group, say five to fifteen people, you need to make things a bit more formal, and I would suggest doing feedback exercises.
Feedback exercises are a great way to check for shared understanding or misunderstanding. The idea comes from a book called Sources of Power by Gary Klein, where it was part of his recommendation on how to clarify understanding in the US military as part of the Commander's Intent template.
A feedback exercise is based on the idea that most misunderstanding arises around boundaries and edge cases, and that people will usually agree on a happy day scenario.
Again, somebody who represents the business - a product owner, an analyst, or somebody who's done a bit of upfront analysis - will explain something and present a few initial cases with some happy day scenarios.
And then people get to think about boundaries or edge cases, but without saying out loud what the expected outcome would be. They list these cases, and then the workshop facilitator announces each case individually, explains it a bit, and then everybody has a minute or two to think about the expected outcome and write it down individually on a piece of paper.
If you're doing a remote session, think about it and then paste it into the Zoom chat at the same time, or reveal it on a Miro board.
The idea with that is that everybody has to individually think about a difficult edge case and explain their understanding. And because we use concrete examples, they have to really come up with a concrete outcome. Then the facilitator can very easily see if everybody has a shared understanding or if we have differences in understanding.
If somebody has different outcomes from the rest of the group, then they understood the case differently. And that often opens up a really good discussion on why we missed this. Did we have a wrong assumption? Does that person have some deeper knowledge that other people don't have? And so on.
3) This kind of facilitation works really well for groups of five to fifteen people. Above that, I tend to use Diverge and Merge style workshops, where a larger group is split into several smaller groups.
In the first part, that's the Diverge part, each individual group works on a whiteboard, a flip chart or a virtual board, where they document their understanding of a piece of requirement or a task with some concrete examples, usually for ten or 15 minutes. They write down how they understand it, they try to get into boundaries and write down the expected outcomes in the boundaries.
Then in the Merge part, the facilitator compares what different groups have written down.
This is an amazingly quick way for a very large group of people to visualize their understanding. I've done it with 30-40 people in the room, and virtually we can do it with even more people very easily.
Similar to the feedback exercises, we look for differences: the same or similar examples with different outcomes, differences in structure, question marks. This helps very quickly spot differences in understanding, misunderstandings, and different opinions.
A single facilitator can coordinate good discussions and make sure that everybody's on the same page.
So those would be the three key ways of doing things, depending on your group size.
4) There is a fourth way, described in the Specification by Example book, that I try to avoid if possible. It's called Write and Review, and that's when one person writes down their understanding of something with examples and sends it to everybody else to review and complain about.
I don't like that one because it doesn't support collaborative analysis.
I think, based on my experience and on what Larman wrote in the book Practices for Scaling Lean & Agile Development, collaborative analysis is one of the big differentiators between successful iterative teams and less successful iterative teams.
Getting one person to write things down and then get everybody else to review these things is not really collaborative analysis. It's a chance for feedback.
But this fourth option exists, and it can be useful if a team is very highly distributed and time zones don't allow you to meet at all. Otherwise, doing something more collaborative is a lot better.
Avanscoperta: Can’t wait to see these in action during our workshop together…
Gojko: During the workshop, we go through all these kinds of practical ways of getting to the examples quickly.
The key points of specification by example are that it
- helps us get to alignment quickly
- flushes out misunderstandings
- shows problematic, wrong assumptions
- helps us spot what people did not understand well or understood differently.
And the key thing there is to do that quickly. Facilitation techniques for getting to the right examples quickly are amazingly important and we cover them in depth in the workshop.
Avanscoperta: First question from the audience. Bruno asks: “I understand that the right place for this exercise, in a scrum context, would be refinement and planning. Do you have any pitfalls to share? Things that work and don’t work?”
I think this question arrived around the time we were discussing the first or second type of exercise (see above).
Gojko: In a sense, Bruno is right - doing collaborative analysis is a faster way to do refinement. Planning has two parts: figuring out what we do, and prioritisation. The activity we're carrying out with specification by example doesn't really help with prioritisation, but it does help with the first part, understanding what we do.
This practice has been around for probably 30 or 40 years in different variants, and I think that, as a community, we codified it pretty well about 15 years ago.
There are lots of ways of getting this wrong; here are the main three:
1) One way of getting this wrong is for one person to do all the analysis and just present the examples, so everybody else disconnects because they think all the analysis is already done.
And again, that's why I like doing analysis collaboratively to make sure that we flush out people's misunderstanding.
2) Another pitfall happens when people don't really look at the boundaries, they just look at a few happy day scenarios, and they codify that with examples.
The reason why this is bad is that most misunderstanding actually arises at the boundaries of knowledge, at edge cases. Getting to the problematic edge cases is essential, but very few people have good skills for getting to these boundaries.
Again, that's why it's important to mix developers, testers and analysts: testers will usually be better equipped to come up with boundaries, product people will know how to set the context, and developers will know how the existing system works or how similar problems can emerge, and can then propose their perspective.
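To show what "examples at the boundaries" look like in practice, here is a sketch (the ticketing rule and all its numbers are invented): the product owner's happy day scenarios sit in the middle of each band, and the tester-style cases surround each edge of the rule.

```python
def ticket_price(age: int) -> float:
    """Invented rule: children under 5 travel free, ages 5-15 pay
    half of the 10.00 adult fare, 16 and over pay the full fare."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 5:
        return 0.00
    if age < 16:
        return 5.00
    return 10.00

# Happy day examples plus the boundary cases a tester would add
# just below and exactly on each edge of the rule.
examples = [
    (3, 0.00),    # happy day: toddler
    (4, 0.00),    # just below the first boundary
    (5, 5.00),    # exactly on the first boundary
    (15, 5.00),   # just below the second boundary
    (16, 10.00),  # exactly on the second boundary
    (40, 10.00),  # happy day: adult
]
for age, expected in examples:
    assert ticket_price(age) == expected
```

The boundary rows (4/5 and 15/16) are exactly where the misunderstandings described above tend to hide - which side of the edge does the rule fall on?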
3) The third typical pitfall is to let the tool drive the types of examples people can collect and express.
And that's why I very much prefer doing this either with whiteboards or flip charts or, in a remote setting, with some kind of open whiteboard tool like Miro, instead of starting with Excel or Word. If you start with Excel, you're 100% guaranteed to end up with a table, and if you start with Word, you're very likely to end up with bullet points and nested bullet points, because that's what's easy to do.
But maybe a drawing would be a better way to capture an example for a flow, or maybe a mix of sentences and tables would be better. A free-form board tool, or pens and a physical board, lets people structure examples in the best way for that specific scenario instead of being constrained by a tool.
The worst possible offense is to start collecting examples directly in Jira, I think, because Jira doesn't even have a decent editor for any of these things. Then people get stuck in formatting issues, in how to structure it…
It's best to start with something free-form and then refine it later, and to separate collaborative analysis from refinement - these are the two key things for doing this right.
Avanscoperta: With Miro, if you’re a remote team, you can do a lot of things in this sense.
Gojko: I still like whiteboards and flip charts, but in remote times, not a lot of people have whiteboards and flipcharts at home, and that's the problem. Digital tools like Miro and Mural are good as long as people don't get stuck too much on the tooling aspect of it.
When the tool starts shaping your discussion and preventing you from expressing things you want to express, like drawing a diagram, using a wireframe or using a bit of tabular data, then it's an obstacle rather than a facilitator.
Avanscoperta: Specification by example is also a book that was published in 2011. How did it all start?
Gojko: I guess Specification by example, as a practice and in different shapes and forms, is probably 30 or 40 years old.
David Parnas was already using concrete tables to specify things in the '70s or early '80s.
Ward Cunningham used what later became Specification by example while working on the WyCash portfolio management system in the early '90s.
In the book Exploring Requirements, published in 1989, Donald Gause and Gerald Weinberg talked about using test cases to flush out misunderstandings about requirements, and getting people to complain about concrete test cases early on, while reviewing requirements - one of the bases of Specification by Example.
Generally, I think the boost for Specification by Example came from Ward's work and Rick Mugridge's work on Fit (Framework for Integrated Tests) in the early 2000s.
Kent Beck's idea of customer tests in extreme programming also had an impact. Kent talked about Test-Driven Development from two perspectives. One was the perspective of developer tests, test-driven technical design tests that would help us figure out the design correctly, and he also talked about customer tests, something that would help us do Test-Driven Development on business requirements.
Lots of people really struggled with the implementation of that, because XP customers, product owners and business analysts were not really able to read code. Codifying these tests was a challenge.
The work that Ward was doing around that time on a tool called Fit helped bridge that gap. He was drawing on things that had emerged from the WyCash Plus project - a financial planning tool where they were using spreadsheets as examples - and Fit enabled people to use human-readable tables in HTML documents as customer tests that could be fully automated.
A bunch of tools later emerged from Fit thanks to the work of Uncle Bob Martin and Micah Martin. Concordion, created by David Peterson, as well as many other similar tools, like JBehave by Dan North and Liz Keogh, were all created around the same time.
This is where given-when-then emerged and Cucumber came a bit later.
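For readers who haven't seen the format: given-when-then splits each example into a starting context (given), an action (when), and an expected outcome (then). A minimal sketch in plain Python follows - the account scenario is invented, and real given-when-then tooling such as Cucumber uses its own plain-text format rather than code:

```python
def withdraw(balance: float, amount: float) -> float:
    """Toy domain logic: withdrawing reduces the balance."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    balance = 100.00
    # When the customer withdraws 30
    balance = withdraw(balance, 30.00)
    # Then the remaining balance is 70
    assert balance == 70.00

test_withdrawal_reduces_balance()
```

The point of the structure is that a business person can read (and argue with) the Given/When/Then lines without reading the code underneath them.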
There was a lot of innovation in the early and mid-2000s, when the community was trying to figure out how to actually implement this idea of customer tests: how do we create something that is automatable, but that developers, testers, analysts, and customers can still understand in the same way?
And the book Specification by example came out of my obsession with how effective this is as a practice, and my wish that more people learn about that and use it in their process.
I saw how powerful this was in the mid-2000s on several projects, and then I realized how important this is as part of an iterative process. Back then, this wasn't really a well-known practice, so I was kind of trying to get people to learn more about that.
And Specification by example is actually my third book on that topic. I published Test Driven .NET Development With FitNesse in 2007, and then Bridging the Communication Gap came out in 2009.
Bridging the Communication Gap explained my own experiences with the process, but I also wanted to document the fact that other people were doing this as well.
The book Specification by example is in fact the result of wider industry research that documented 50 really good case studies of how people were doing it in banks, insurance companies, large travel booking sites and so on.
I wanted to show people that specification by example is not some weird black-magic thing that only small teams can do, but something that large organisations have already implemented in practice.
By the time the Specification by example book came out, I think we crossed from early adopters to folks for whom this was becoming an established practice. There were some really good ways of adopting this for lots of different types of organizations.
And as you said, this was 12 or 13 years ago. So now this is a well-documented, well-established practice.
Avanscoperta: Are there any case studies contained in the book you want to tell us more about?
Gojko: It's difficult to single out a specific case study. Specification by example usually has the biggest effect in organizations where people really have these problems: delays in delivery, misunderstandings, rework caused by misunderstandings or by bugs. The goal is to significantly reduce the cycle time from when somebody says, “Look, this is what we should do”, until it goes live and doesn't come back.
I worked with a logistics company a few years ago, helping them adopt specification by example, and their average cycle time was four or five iterations to get something out because usually, they would develop something, it goes live, then the customers complain, then it goes back, then it goes to more analysis. It goes live again, people complain again, then kind of they fix it again, it goes live.
And they reduced that to between one and two cycles on average, meaning that specification by example helped them cut their cycle time by a factor of two or three.
Lots of people I interviewed for the book reported much, much higher quality of work, and far fewer bugs.
There was a fantastic case study in the book about a team at a large investment bank in London where Steve Freeman worked at the time. Steve Freeman is a genius, and he worked with some other very smart people there: they had a critically important system for the bank with zero bugs in production for five years.
And I think that is a beautiful thing to achieve - while still releasing every two weeks. And this was 15 years ago, when people were still claiming that you can't do two-week iterations in banks.
Avanscoperta: One last question from Dario for today. “Can the specification by example approach also be used to facilitate conversations with end users to gather requirements, strengths and weaknesses?”
Gojko: Can specification by example be used with end users? Absolutely. There's nothing magical in examples that only people internal to the team would understand. External users also can complain about these examples or propose their own examples.
The strength of specification by example is that you can cross different roles very easily and you can get people from different roles to understand things and talk about this.
The weakness is that it can only be used on things that are relatively deterministic, stuff where there is an expected outcome.
For example, video games are an area where you can't really use this, because the key quality of a video game is for it to be fun. It doesn't have an expected outcome for a specific scenario. Even if a video game has bugs, if it's fun, people will keep playing it. But if you hit all the business requirements of a video game and it's not fun to play, it's still going to be a failure.
So things like fun are difficult to describe with examples, but stuff that is deterministic, stuff where there is an expected outcome or things where you can get people to complain, can be described with examples nicely.
So any business calculation rules, things like usability, performance, wireframes, pretty much anything that is deterministic, you can describe very nicely with examples.
Avanscoperta: My last question will be about the remote version of the workshop Specification by example, a 5-module remote experience. How is it going to work?
Gojko: There will be a nice mix of theory and practice. Each day has a specific theme.
1) On the first day, we look at how to get misunderstandings flushed out quickly and get a shared understanding with examples.
2) On the second day, we look at facilitation techniques for these collaborative analysis workshops.
3) On the third day, we usually look at given-when-then as a format for doing these things.
4) On the fourth day, we look at example mapping and refinements.
5) And on the last day, we look at how that fits into the wider process.
So, within each day, there's a main theme, but we split each day into three sections.
Each section starts with a very quick presentation from me. Then, a practical hands-on exercise where people try to use these techniques, followed by a discussion session where we look at the results of the exercises, talk about the differences, get people to really understand these things the right way and make sure that they have the knowledge and skills to take it to their process.
So the workshop is very practical. People spend a lot of time doing these things from different perspectives, learning lots of facilitation techniques and lots of interesting ideas they can take back to their companies to improve the process.
Learn with Gojko Adzic
Gojko is the trainer of the Specification by example Workshop (Milan, 19-20 March 2024).
Check out the full list of our upcoming training courses: Avanscoperta Workshops.
Subscribe to our Newsletter 📩 (available in Italian and English) and be the first to know when we schedule a new workshop or publish a new blog post.