Quality assurance for agile teams


Think about your team’s process for a moment. What steps do you go through for each new unit of work? What process do you have defined? What steps are in your visual management system? Does your team follow some sort of defined standard work?

Chances are your team does have a process, whether formally defined or simply understood, and it is represented in some way in the visual tools the team uses: on the story cards themselves, in the columns of the team’s Kanban wall, or in the software used to manage work in progress. Steps like:

  • Backlog
  • Analysis
  • Development
  • Testing
  • Done

Between these steps, are there gates, like Three Amigos meetings or peer reviews? Are there steps built in to control quality, such as code review, unit testing, regression testing, or user acceptance testing?

A Team’s Quality Process

I recently took the time to draw up our team’s process, and found that we have 18 specific steps already built in to ensure quality, from project initiation to release into production, including (not an exhaustive list):

  • JAD session with the Product Owner to ensure that the software developed meets the customer’s needs
  • Writing the user story or requirement to specify exactly what needs to be developed, be it in use case, scenario, or acceptance test form
  • Peer review of requirements, to transfer knowledge from experienced Analysts to newer Analysts and to verify the accuracy of the requirements document
  • Three Amigos session that includes Analysts, Developers, Testers, and the Product Owner to complete the requirement, and allow for all domains to understand what is to be developed
  • Pair programming between two Developers, one writing the code and the other writing unit tests
  • Test script design by the Tester, who reads the requirement, identifies any boundary cases, and ensures that the required test data is available (a unit test sketch follows this list)
  • Continuous integration of automated tests, running the test scripts from each user story over the course of multiple Sprints until the release
  • System testing, including regression testing to ensure that the new or changed software operates within our enterprise environment and with other systems, as well as testing all unchanged software to ensure it continues to operate as expected
  • Performance testing to validate that new or changed software meets non-functional requirements for speed and capacity
  • User acceptance testing with the Product Owner to ensure that completed work meets customer expectations and is fit for use
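
To make the pairing and test-design steps concrete, here is a minimal sketch of boundary-case unit tests, written in Python with pytest. The pricing rule (10% off orders strictly over $100) and the apply_discount function are hypothetical, invented purely for this example:

```python
# test_pricing.py - boundary-case unit tests, as a pairing partner might
# write them. The pricing module and its 10%-over-$100 rule are hypothetical.
import pytest

from pricing import apply_discount  # hypothetical module under test


def test_no_discount_at_the_boundary():
    # $100.00 is not "over $100", so the total is unchanged
    assert apply_discount(order_total=100.00) == 100.00


def test_discount_just_over_the_boundary():
    # one cent over the threshold triggers the 10% discount
    assert apply_discount(order_total=100.01) == pytest.approx(90.01, abs=0.01)


def test_negative_total_is_rejected():
    # invalid input should fail loudly, not silently discount
    with pytest.raises(ValueError):
        apply_discount(order_total=-1.00)
```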

These aren’t all the steps in the team’s process, but they are some of the ones where we are specifically trying to prevent software defects. So are we getting the value out of each step that we want? Based purely on anecdotes from team members and some quick data, I don’t think we are: I hear Three Amigos sessions aren’t happening as often as the team planned, we only have 50% unit test coverage, our regression test coverage is unknown, and so on. So even when the process is being followed, sometimes the activity is really just a checkmark to say we completed a step, rather than a fully invested quality action.
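
One way to turn a checkmark back into a quality action is to have the build enforce it. Here is a minimal sketch of a coverage gate using pytest and coverage.py; the myapp package and tests/ folder are hypothetical names, and the 50% floor simply matches our current baseline:

```python
# check_coverage.py - fail the build if unit test coverage drops below a floor.
# Sketch only: "myapp" and "tests/" are hypothetical paths for this example.
import sys

import coverage
import pytest

THRESHOLD = 50.0  # today's baseline; ratchet it up Sprint by Sprint

cov = coverage.Coverage(source=["myapp"])
cov.start()
exit_code = pytest.main(["tests/"])  # run the unit test suite under coverage
cov.stop()
cov.save()

percent = cov.report()  # prints the report and returns total coverage (float)
if exit_code != 0 or percent < THRESHOLD:
    sys.exit(f"Quality gate failed: tests exit={exit_code}, coverage={percent:.1f}%")
```

Raising the threshold a little each Sprint turns the gate into a steady improvement mechanism rather than a one-time hurdle.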

So we have 18 steps where we could detect a software defect, but I found that the process really only has three steps where a defect could be injected: business requirements; user story requirements and design; and code development (configuration, refactoring, migration, and so on).

Everything the Team Does, Except Write the Code, Is to Ensure Quality

Actually, when I look at the process end to end, the only step in my process model that isn’t itself a quality step is writing the code. Here is my revelation: everything the team does, from the point where someone asks for a software feature to be added or changed up to the point where we begin development, and everything we do between that development and releasing the software into production for users, is to ensure a quality product.

  1. Requirements activities to make sure we do the right thing
  2. Development of software
  3. Testing activities to make sure we did the right thing
  4. Release software to users

Measuring Software Quality

We tend to measure quality by counting defects in production (shipped) software. But measuring software quality merely by the number of defects found in production, with metrics like defect backlog and defect density, says more about the effectiveness of a team’s quality control activities (testing) than about the quality (defect-free work) itself. Quality assurance is about preventing defects: engineering a quality process and adhering to its steps so that mistakes don’t occur, or are caught before work is completed. Measuring all defects in completed work, both before release and after the software is in production, is how we get the data needed to improve our software development process.
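
To make those two metrics concrete, here is a minimal sketch; the defect record shape is invented for illustration, not any real tracker’s schema:

```python
from dataclasses import dataclass


@dataclass
class Defect:
    id: str
    status: str    # e.g. "open" or "closed"
    found_in: str  # "pre-release" or "production" - count both, per the text


def defect_backlog(defects: list[Defect]) -> int:
    """The 'defect backlog' metric: unresolved defects."""
    return sum(1 for d in defects if d.status != "closed")


def defect_density(defects: list[Defect], kloc: float) -> float:
    """The 'defect density' metric: defects per thousand lines of code."""
    return len(defects) / kloc
```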

In my company, we are getting closer to understanding what quality looks like. We consider a defect to be anything in completed work that does not match the specified requirement, whether it is found before release or after deploying the software to users. Our definition of completed work is software that has been approved in the Sprint (by the team’s definition of “done,” which might be after the Sprint Review or “Show and Tell” meeting, or after each card has been acceptance tested by the Product Owner) and is now supposed to be “release ready.” If a defect is found in this release-ready code, it is recorded; it can then be added to the Product Backlog for future remediation. This is very much a producer’s view of quality (where quality means delivering the product based on the specified requirements), but our Product Owner has bought into this process and works with us to ensure requirements are accurate first, and to capture enhancement requests as new items in the Product Backlog rather than as defects. The Product Owner and our team’s System Analysts work together to act as a proxy for the customer, thinking about quality from the end user’s perspective (where quality is “fit for use” and meeting their expectations, regardless of the specified requirement).
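
The classification rule itself is simple enough to write down. Here is a minimal sketch of the producer-view triage described above, with hypothetical names of my own choosing:

```python
from enum import Enum


class IssueType(Enum):
    DEFECT = "defect"            # release-ready work deviates from the specified requirement
    ENHANCEMENT = "enhancement"  # works as specified; the user now wants something different


def triage(matches_specified_requirement: bool) -> IssueType:
    # The producer view: if the work matches the spec, there is no defect;
    # the request becomes a new Product Backlog item instead.
    if matches_specified_requirement:
        return IssueType.ENHANCEMENT
    return IssueType.DEFECT
```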

For agile teams using Scrum, we need to be cognizant of the careful balance between prescribed processes and documentation on the one hand, and collaboration and working software on the other. I’ve written about this before: the team needs to collaborate and blend process steps, lest we start following an iterative waterfall process inside each Sprint. I advocate for the team to understand and own its own process, even drawing a process model, as a training tool for new team members or to ensure common understanding. In Sprint Retrospectives, the team must discuss the process and how to improve it, undertaking small changes in each Sprint to see what works. We can measure those improvements with Scrum metrics like velocity and burn-up, as well as with quality metrics like defect density and overall defect backlog (a small sketch of the Scrum metrics follows). We also need to remember that quality software is everyone’s goal, not just a management directive. The team must own its process and take pride in delivering the highest quality product by holding itself accountable.
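
For completeness, here is a minimal sketch of those two Scrum metrics, using made-up Sprint numbers purely for illustration:

```python
# Illustrative Sprint data only - not our team's real numbers.
completed_points = [18, 21, 19, 24]  # story points completed per Sprint
total_scope = 120                    # total story points planned for the release

velocity = sum(completed_points) / len(completed_points)  # average points per Sprint
burn_up = [sum(completed_points[:i + 1]) for i in range(len(completed_points))]

print(f"velocity = {velocity:.1f} points/Sprint")
print(f"burn-up  = {burn_up} of {total_scope} total scope")
```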

How to Use Defect Data for Quality Improvement

Using defect data (in our case, from HP Quality Center), we can dive deeper, looking at each defect’s classification and the phase in which it was detected or originated, and start to learn where in the process most defects are introduced. We have also started using root cause analysis (RCA) during Sprint Retrospectives: a candid and honest discussion, as a team, of where defects were introduced and what quality actions could keep them from occurring in the future. With this activity, we can design the development process to prevent software defects, rather than just trying to detect defects for future fixes.
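
As an illustration of that analysis, here is a minimal sketch that tallies defects by origin phase, assuming the data has been exported to CSV. The file name and the phase_injected column are invented for this example, not HP Quality Center’s actual fields:

```python
# defects_by_phase.py - tally defects by the phase where they originated.
# "defects_export.csv" and the "phase_injected" column are hypothetical names.
import csv
from collections import Counter


def defects_by_origin(path: str) -> Counter:
    with open(path, newline="") as f:
        return Counter(row["phase_injected"] for row in csv.DictReader(f))


if __name__ == "__main__":
    for phase, count in defects_by_origin("defects_export.csv").most_common():
        print(f"{phase:<25} {count}")
```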

Imagine that more defects originate in the requirements phase than anywhere else in the process. We can undertake quality actions in response: ensure that JAD sessions are occurring between customers and Analysts to build a better understanding of their requirements; build in solid peer review steps to share knowledge between more experienced and less experienced Analysts; or hold structured Three Amigos sessions with Developers and Testers to get their perspectives and remove ambiguity from the requirement.

Over time, with small, metrics-based, continuous process changes, the steps where defects are injected, or where they escape detection, will shift. The team can continue to monitor the defect data with dashboards or big visible charts (sketched below), pinpointing where in the process to focus Sprint Retrospective discussion and quality actions.
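
A big visible chart can start as something trivial. Here is a minimal sketch that prints one ASCII bar per process phase, using invented counts; in practice, the tallies would come from the defect-tracker export above:

```python
# A "big visible chart" in its simplest form: one ASCII bar per process phase.
def big_visible_chart(counts: dict[str, int]) -> None:
    for phase, n in sorted(counts.items(), key=lambda item: -item[1]):
        print(f"{phase:<15} {'#' * n} ({n})")


big_visible_chart({"requirements": 9, "coding": 7, "design": 4})  # invented data
```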

Overall, as we improve the process, defects will be detected earlier and earlier, and that matters: the earlier we detect that something is wrong, the less it costs to fix.

