If you ask people to build features, they will, and they will value the delivery of those features, even if delivery doesn’t create big picture success. But if you ask people to be accountable for success, you are asking them to work in a new way — Gothelf
Hiroshi Yamauchi became president of a little-known domestic playing card company in 1949 and transformed it into the multi-billion-dollar video game company we know today as Nintendo. It has been in the game for well over 120 years, outlasting most Fortune 500 companies. If that isn’t big picture success, I don’t know what is.
I believe century-old companies might contain more wisdom than the newest management fads and their simplistic explanations of success — Ajaz Ahmad
What did Yamauchi do differently?
He enforced a strategy where all products were to be cleared through him, emphasising a strong focus on product quality.
He was a human barometer of quality.
Measuring success in the form of burn-down charts isn’t really measuring quality, and favouring speed over quality shouldn’t be an option unless you can do both. What you’ll end up with is a fully delivered roadmap and a mediocre product.
Now, having all your products cleared through the CEO is quite unusual and may not be the best way to do things today, but we can still embed a strong focus on quality.
Our products need a benchmark of quality
What does this look like?
- Every sprint should have actionable learnings that lead you into the next round of product development.
- Every area of your product should have a measure of quality helping you determine what those actions are.
- Product teams should be aligned on 1–2 above (this is the most difficult to achieve because humans)
- 1–3 should be based on what’s valuable to your users
If the above is what you’re looking for, this workshop is a must-try.
If you’re interacting with your users and carrying out usability testing regularly, you’re in a great place to start, as we can use customer feedback as a metric of progress.
Confidence as a measure of quality
We use confidence as a measure of quality of a product. This means that we constantly ask ourselves “Are we confident that a product or its part successfully responds to user and business needs?” These are informed by success metrics based on those needs.
- High confidence — meets/exceeds our expectations and ready to go into development
- Medium confidence — shippable, but we can do better
- Low confidence — go back to the drawing board
The method of measuring should work regardless of what labels you use to describe quality — whether it’s high/medium/low or positive/neutral/negative. What matters more is having a ‘quality measure’ at all.
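If you keep notes digitally, the scale itself can be modelled as a small ordered type. A minimal sketch in Python, assuming a hypothetical `Confidence` name (the labels are interchangeable, as noted above; only the ordering matters):

```python
from enum import IntEnum

class Confidence(IntEnum):
    """Ordered quality scale: a higher value means higher confidence."""
    LOW = 1     # go back to the drawing board
    MEDIUM = 2  # shippable, but we can do better
    HIGH = 3    # meets/exceeds expectations, ready for development

# Because the scale is ordered, comparisons work out of the box.
print(Confidence.HIGH > Confidence.MEDIUM)  # True
```

Swap the member names for positive/neutral/negative (or whatever labels your team uses) without changing anything else; the ordering carries the meaning.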
When to do this workshop?
Just after you’ve completed usability testing. We usually carry out our user interviews for any given sprint in one day and allow anyone from the team to watch the usability test live.
Who should facilitate?
- UX researchers who want to create an evidence-driven culture within their team and may be struggling to see the impact of their efforts
- Designers who have had stakeholders insist on changes they were unable to defend, and who would like to visualise the impact of those decisions as evidence
- Anyone on the product team who has their hands on freshly concocted usability test recordings and is dying to make sense of it all
Job roles aside, it helps to be curious, humble, oriented towards learning, good at collaboration and, to a certain degree, comfortable with uncertainty.
This exercise is not for The Lone Ranger. The UX Unicorn. The cowboys and the gurus. Having anyone who is really attached to an idea (and themselves) will be detrimental to the exercise.
I hope it works for you as it does for us. It’s:
- Efficient
- Effective
- Collaborative
- Interactive
- Colourful
Without further ado…let’s do this.
What you will need:
- Your ‘quality measure’ — we’ll be using Low confidence, Medium confidence and High confidence.
- A facilitator
- Recordings of your usability sessions
- Your team: one team member per user if possible. Try to get engineers, product owners/managers and stakeholders involved in this too; the knowledge sharing will be greater.
- Post-it notes in 4 colours: three to represent the levels of confidence and a fourth to represent the area of the product being measured.
- Wall space
- Sharpies/Markers
Part 1: Silent understanding
This part can be done live while watching the usability tests.
1. Write down the areas of the product that you tested and stick them on the wall, leaving plenty of space between them.
2. Divide the remaining 3 colours of post-it notes among your team: 1 colour for positive feedback (high confidence), 1 for negative feedback (low confidence), and 1 for neutral observations (medium confidence).
3. Assign one recording to each member of your team to analyse. If you have 5 recordings, make sure 5 team members take part; you’ll get through the exercise much quicker and the knowledge sharing will be greater.
4. Explain to your team that they will watch the recordings and write the feedback down on the relevant coloured post-it note. They can write direct quotes or their own observations. They must also label each post-it note with the user’s ID to minimise bias when it comes to analysis later.
You will mostly be doing this step in silence.
Part 2: All together now
1. Get everyone to stick their feedback next to the area it belongs to. At this point the order doesn’t matter; just get it all up there. By the end you will have something that looks like this:
2. Start grouping the post-its together by colour. Do this together to speed things up. Almost instantly you will begin to see the areas in need of significant improvement versus the areas that are testing well, which makes it easier to break the feedback down into further patterns. You will have something that looks like this:
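The grouping step is really a tally of notes by area and colour. If your sessions were logged digitally rather than on a wall, the same clustering could be sketched like this (the note structure and example data are hypothetical, purely for illustration):

```python
from collections import Counter

# Each note: (product_area, confidence_colour, user_id, observation).
# The user ID is kept on every note, mirroring the labelling step above.
notes = [
    ("checkout", "low", "user-1", "Couldn't find the pay button"),
    ("checkout", "low", "user-3", "Confused by the shipping step"),
    ("onboarding", "high", "user-2", "Signed up without any help"),
    ("onboarding", "medium", "user-1", "Paused at the avatar screen"),
]

# Count notes per (area, confidence) pair — the digital analogue of
# clustering post-its of the same colour under each area on the wall.
tally = Counter((area, conf) for area, conf, _, _ in notes)

for (area, conf), count in sorted(tally.items()):
    print(f"{area}: {count} {conf}-confidence note(s)")
```

The areas with the most low-confidence notes surface immediately, just as the densest cluster of one colour does on the wall.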
Part 3: Culture is conversations
Make your users an integral part of your daily conversations. Debate openly and freely. This part really allows different points of view and different skills on your team to come together and identify problems together as opposed to being given a problem to solve.
- Get talking — Begin by going through each area and discussing the levels of confidence in it. Identify trends, group them together, and discuss what problems are emerging, why you think they have surfaced, and how you can best action them. Try to avoid designing solutions right now; if ideas and actions arise, write them down.
- Write actions — At a glance, you’ll be able to see what areas are working well and what aren’t. Deciding on what to do next should come from your understanding of what is valuable to the user and business and which areas are causing the most friction.
- Size the actions — scope out the resources required to fix them; this will help prioritise them later.
- High confidence — If areas of high confidence emerged, congratulations. You should feel good about this. These areas are ready to go into development.
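Once actions are written and sized, the prioritisation suggested above (biggest friction first, cheaper fixes breaking ties) can be sketched in a few lines. Everything here is hypothetical example data, not part of the workshop itself:

```python
# Each action: (description, confidence_in_the_area, effort_in_days).
actions = [
    ("Rewrite checkout copy", "low", 2),
    ("Redesign shipping step", "low", 8),
    ("Polish avatar screen", "medium", 3),
]

# Lower confidence means more friction, so it sorts first;
# among equally low-confidence areas, smaller efforts come first.
conf_rank = {"low": 0, "medium": 1, "high": 2}
prioritised = sorted(actions, key=lambda a: (conf_rank[a[1]], a[2]))

for desc, conf, effort in prioritised:
    print(f"{desc} ({conf} confidence, ~{effort} days)")
```

This is only one possible ordering rule; your team might instead weight by user value or business impact, as the article suggests.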
For the products. For the culture.
The confidence framework serves as our method of measuring quality which I talked about in detail in my last post. It keeps quality as a guiding principle that manifests itself in our products as well as our team.
We’ve embedded this confidence-based process of measuring quality with the clients we work with, and it has worked really well. We used the confidence framework to visualise the current state of a client’s product and to shift priorities so their roadmap reflected actions that directly deliver customer value.
This particular exercise tends to fall on Day 6 of our product cycle.
On Day 7, we rest.