If you asked anyone on my team what agile practice is most responsible for our success over the past eight years, I bet they'd say retrospectives. At the start of every two-week sprint, we spend time talking about the previous sprint, identifying areas that need improvement, and thinking of ways to overcome obstacles. But I wonder if it's not so much the retrospectives themselves, as the "small experiments" (to borrow Linda Rising's term) we perform to try to address our problem areas.
Here's a recent example. Our product owner is awesome, but like many POs, he has many responsibilities and not enough time. Years ago, he came up with the idea of story checklists. Before each iteration, he prepared a checklist for each user story, following a template that included information such as mock-ups for new UI pages or reports, whether a new story affected existing reports or documentation, whether third parties needed to be involved, and high-level test cases. This helped us get off to a running start with each story.
As our PO was burdened with more responsibilities, he started to run late on preparing the story checklists. The downward slide started slowly. At our sprint planning, he'd say, "Oh, I am still working on the checklist for this one story, but I'll have it ready soon." Or, "I'm waiting to hear from the head of sales to get the final requirements for this, I'll let you know as soon as I know." We're agile, we're flexible, we have a lot of domain knowledge, so we felt we could cope.
But the one missing story checklist soon turned into two, then three; after a while, we weren't getting any story checklists at all. We discussed each story with our product owner at our sprint planning meetings and wrote requirements and high-level tests on the whiteboard, but that whiteboard also had outstanding questions for each story. We'd start working on a story with the best information we had, but then there would be changes. We spent a lot of time going back and forth to find the PO, ask questions, and update the requirements as they changed or were finalized. We still got our stories done, but it was costing the company more, and slowing us down.
The PO had no motivation to reverse this slide. It wasn't even his fault; he was usually waiting on other people. We were still finishing the stories. But we could have done more work if we hadn't lost so much time to the back-and-forth over requirements.
Our frustration mounted. Finally, at a retrospective, we decided we had to do something about this problem. The company was spending extra money to finish each story, simply because the business people were not getting their ducks in a row before each iteration began. We decided to try an experiment.
We had recently begun to use a product called MercuryApp to record our feelings about the progress of the sprint every day. (That is another experiment, a way to keep better track of how things go so our retrospectives can be more productive, but that's the subject of a future blog post.) This product lets you rate your feelings on a five-point scale, from a very sad face to a very happy face. This gave us an idea. At the end of our sprint planning meeting, we put a "rating face" next to each story on the whiteboard. If we didn't have any requirements, we put a very sad face. If we had all the requirements we needed to complete the story, we put a very happy face. Most were somewhere in between: a somewhat sad or happy face, or a "meh" face.
The second (and possibly more powerful) part of our experiment involved pushing back on the business. We told the product owner that any stories that didn't have requirements by the second day of the sprint would be taken off of our task board and not done until the following sprint.
We neglected to specify a time on the second day of the sprint by which we needed all the requirements, and our PO delivered some at 11 p.m. But he got them to us for every story! This was a great result.
At our next retrospective, we went through each story on the whiteboard, and talked about how we felt about each one now. Interestingly, some stories that had a sad face ended up going well, and some with a happy face turned out to be trickier than we had thought. This gave us a better understanding of what we really need to know about each story before we start working on it.
We couldn't do anything directly about our PO being overworked, or about the business people failing to provide information. But we could try this experiment to make our lack of requirements visible, and push back on the business to say, "We aren't going to waste our time and yours by working on stories that don't have requirements at the start of the sprint." If this experiment hadn't helped, we'd have tried another one.
Have your retrospectives—but do take Linda Rising's advice, and try small experiments. I am betting you will find unexpected ways to improve how you work.
Fantastic post, Lisa! This is one of the best examples I've seen of how a great team solves a problem. I have often seen similar situations, and often the result is blame directed at the person who isn't delivering what the team needs. I see this experiment as a system change that helped get your team the outcome they needed to finish the work. I usually observe "it HAS TO get done..." without any real change to influence different behaviour. Thanks for sharing this story!
It's easy to fall into that blame game; we try hard to focus on the issues and think of experiments to address them. Thanks, I am glad you liked the post!
I am always looking for ways to improve our retrospectives, and having the team think about how they feel about each story is a very interesting idea. Thank you!
The team I work with as a Scrum Master is the only team on the project that has never missed a retrospective. They have even decided to make sure retrospectives happen when I cannot be there, which shows how much they value the meeting's importance to the process and to the team's success.
Throughout the retrospective process, this team has used the meeting in many different ways, and I tend to let the mood of the team direct whether it is meant to be a process-improvement meeting, a true review of how things went, or just a down-and-dirty venting session. I have learned with this team (together for 2.5 years now) that sometimes bonding over a sprint's challenges in a gripe session improves team performance in the next sprint more than trying to figure out what to fix and how.
This team has defined, refined, and thrown out a lot of process over the past 2.5 years, and we continue to do so as the needs of the project change through different cycles of development. It has helped make us one of the highest-performing teams on the project, even when randomization strikes.
That's a good point, sometimes people just need to vent. It might clear their mind and help them let go of the emotional issues and think of new ways to improve. It's good to hear about other teams who find a lot of value in retrospectives.
Have you tried performing a Pre-Mortem right after your planning meeting? A Pre-Mortem helps predict failure and combat the go-fever of the business. Here is a presentation on Pre-Mortems: https://www.haikudeck.com/p/8qQ1cTUbEX/pre-mortems. I will be presenting this at the PNSQC conference in Portland, OR, Oct 13th-14th.