Unleashing the Power of Sprint Value Scores: The Way to Frequent and Objective Feedback

In Agile teams, collaboration between business and development is crucial for creating valuable products. In service companies like ours, teams are challenged even further to gain an in-depth understanding of the specific things each client finds most valuable. Businesses differ, and so do their expectations. Committed to understanding our clients’ needs and helping them succeed, we continuously come up with new ideas on how to facilitate communication in our distributed teams, so they can get the most out of this collaboration, set the right expectations, and get early feedback. One recently improved practice that looks promising so far is Sprint Scores.


What are Sprint Scores?

Sprint Scores are a way to facilitate retrospection on previous iterations. At their core, they are the questions we ask each other: on a scale from 1 to N, how would you rate the sprint? And why did you give it that score?

Sprint Scores are an excellent conversation starter for any retrospective activity. Not only do they open a safe environment for people to express their general satisfaction and identify pain points, good practices, and happiness factors, they are also a non-intrusive way to check and align team members’ expectations. Having each team member give feedback on the value they see (or don’t see) created during the sprint reveals plenty of insights. It also helps bridge any gaps between different functions (QA, development, UX, PO, etc.).

As an outsourcing company, one of the main challenges we face is building a better understanding of each other’s needs between us and our clients. For a long time we’ve been asking POs to “rate the sprint” to gather insights on their overall satisfaction early on, as they are often our most direct contact with the client. Although this idea had potential, it rarely brought any meaningful insights. The POs would give a random number, not really understanding what they were expected to rate, and saying that overall they were happy with our work. The lesson we needed came as we gained more experience as a product company. Scores were not enough anymore; the numbers alone meant even less. What mattered was how aligned we were around our goals. We discovered that crafting guidelines, or components, for assigning scores helps. Soon we observed the conversation naturally shifting from the scores to the success criteria, and discussions emerged around what we value as a team. Different teams chose different scoring systems. Here are two examples:
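Whatever the exact system, the mechanics tend to be similar: break the overall score into a handful of components, rate each one, and combine them. Below is a minimal sketch of that idea; the component names, weights, and 1–5 scale are purely illustrative assumptions, not the actual criteria any of our teams use.

```python
# Illustrative sketch only: the components, weights, and 1-5 scale are
# assumptions for this post, not the actual scoring systems our teams use.

SPRINT_COMPONENTS = {
    "sprint_goal_met": 0.4,    # did we deliver the value we committed to?
    "story_completion": 0.2,   # share of planned stories finished (carryover hurts)
    "quality": 0.2,            # escaped defects, debt added or paid down
    "collaboration": 0.2,      # how well business and development stayed aligned
}

def sprint_score(ratings: dict[str, float]) -> float:
    """Combine per-component ratings (each 1-5) into one weighted sprint score."""
    return round(sum(SPRINT_COMPONENTS[name] * value for name, value in ratings.items()), 1)

# Example: a PO rates the last sprint component by component
# instead of picking one number out of thin air.
print(sprint_score({
    "sprint_goal_met": 4,
    "story_completion": 3,
    "quality": 5,
    "collaboration": 4,
}))  # -> 4.0
```

The exact components and weights matter far less than the conversation about which ones belong on the list; that discussion is where the shared understanding comes from.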

THE SMALL PRINT

Guidelines shouldn’t be prescriptions; they should only guide you. Everyone can add their own touch to the scoring. In fact, it’s better if we do, because it brings more insights. We can also expect the components to shift and change as the team and product mature. A major drawback of guidelines is that we risk becoming so “objective” that we forget to say what’s really on our minds. So consume responsibly.

Regardless of the approach, these conversations helped both POs and engineers realize certain things: what the cost of debt is; that it’s hard to reach 100% story completion (zero carryover) if the stories aren’t well groomed or split; that missing a story isn’t so bad as long as you learn from it; that it is essential to communicate progress clearly and on time; and so on.

“We don’t need an accurate document. We need a shared understanding.”

~ Jeff Patton

This post was written by Denitsa Stoycheva