A Context-Driven Guide to Release Test Reporting

As a test lead and test manager, I’ve participated in many software release cycles, and over time I’ve developed a practical approach to reporting on the status of a product during a release testing cycle.

When I’m testing a release candidate, I have a list of items I want to make sure I check (those checks could also include automated checks we run), and I also want to test the product against a set of risks based on whatever changed since the last release. Those changes can include new features, API implementations, bug fixes, tech debt, and any number of other things that happen in a software development project.

We can experiment with test techniques that help us cover the product in different ways. For example, we can design end-to-end flow tests that include the things we want to check, but then vary a flow to also cover a particular problem we think could happen and see how that impacts the experience. Or, based on our knowledge and research of our customers and their needs and wants, we can design compelling scenario tests that take into account how our customers might use (or misuse) the product.

With software release efforts, large or small, it can be difficult to get a bird’s-eye view of the health of the product. There are many moving parts as everybody works toward a common goal against fast-approaching deadlines. Bugs, problems with testability, and other concerns can and do fall through the cracks. It can be daunting (sometimes paralyzing) for those making product decisions to say “Yes, let’s ship it” in the face of so many unknowns.

Testers and test leaders are in a unique position to support product decision-makers because we spend the majority of our time testing and learning the product space, and we often see problems that others don’t. To provide these stakeholders with timely and accurate information about the product in a meaningful and methodical way, we have to carefully organize the testing effort. To see the method I use to organize test efforts, read my article on adopting Session-Based Test Management.

As testing progresses and we receive test reports and statuses from testers embedded in various squads, we can take pertinent information and provide a Release Testing Report to product decision-makers. Then, we can have the necessary conversations about how the problems we found affect product quality. In my experience, when these conversations happen regularly, the quality of the product improves over time as fixes and improvements are prioritized. The whole team gains a better awareness of the problem areas. 

The main objective here: be product-centered and focus on the problems at hand.

Release Testing Report

To illustrate the thinking behind this process, I’ve included an example of a Release Testing Report, with explanations of each section.

The report is a mind map. I think mind maps are a powerful way to organize complex information that must be presented to other people. Humans are visual creatures and we respond well to information that is organized in a simple and visual way. I’ve also used Google Sheets, Excel, and business intelligence tools such as PowerBI to create similar reports. 

In a release testing cycle, there will be regular status report meetings with product decision-makers and development squads. Those are the times to bring up the release report and draw their attention to the issues. 

Setting the context

To set the context for this example, imagine that you are a test lead or a test manager who oversees the testing effort across three squads. Each squad has one or two embedded testers who report up to you. You take the information they provide as they test and you create a release report twice a week, on Tuesday and Thursday, when all the squad leads, product managers, and release managers meet to talk about release readiness. In these status meetings, you have the following Release Testing Report ready to go and you are prepared to talk about the problems.

Release Testing Report Example


Release Information Box

This is an area to put information about release versions, build numbers, browsers, devices, operating systems, and release status. This is not a full list, of course. Every release effort will be different and require different information. 

The release status section aims to answer just one key question that every stakeholder will have during a release cycle: “Did you find problems that threaten to prevent timely release?”

In this context, as the test lead, you are an information steward and a quality advocate, here to hopefully make some of those shipping decisions easier to make. You want to give relevant information and raise the important issues without creating panic or defensiveness in your coworkers. By staying product-centered and by focusing on the problems, there is no time to panic. There is only time to get shit done.

The release status section is color coded:

  • Green for areas where we didn’t find significant problems, 
  • Yellow for areas where there might be a problem, 
  • Red for areas where there are definite problems, 
  • Pink for areas where we have testability issues, 
  • Gray for areas we won’t/can’t test this time around or haven’t started yet.
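If you track statuses in a script or spreadsheet export rather than directly in the mind map, the scheme above can be captured in a tiny data model so every squad reports consistently. Here is a minimal sketch in Python; the `AreaStatus` enum and the area names are my own illustration, not part of any particular tool:

```python
from enum import Enum

class AreaStatus(Enum):
    """Color-coded status for a coverage area in the release report.
    (Illustrative names; adapt the wording to your own context.)"""
    GREEN = "no significant problems found"
    YELLOW = "there might be a problem"
    RED = "definite problems found"
    PINK = "testability issues"
    GRAY = "won't/can't test this time, or not started yet"

# Tag each coverage area with a status as tester reports come in.
statuses = {
    "Checkout flow": AreaStatus.YELLOW,   # hypothetical area names
    "Admin console": AreaStatus.GRAY,
}
print(statuses["Checkout flow"].value)  # → there might be a problem
```

Keeping the meanings next to the colors this way helps avoid the common failure mode where “yellow” quietly means something different to each squad.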

Test Strategy

The test strategy section is where you briefly describe how teams will conduct testing and the ideas you have for testing. Take at most one or two paragraphs to describe your high-level strategy.

Notice that in this example, you made the determination that the teams will also need to do performance and accessibility testing on the release builds. Therefore, you have directed testers in each squad to dedicate some amount of their time to these types of testing. The testers then report (in written form) and debrief (in a meeting or via chat) on their results with you when they have completed testing. 

It would then be up to you to determine which findings are important enough to bring up to decision-makers in status meetings. This is a judgment call, based on a careful consideration of the factors and the decisions that have already been made, and after consultation with testers in each squad.

Product Areas

Product areas in this mind map are those parts of the product where testing is being conducted – the test coverage areas. In the example report, there are six product areas. In each product area, I represented a slightly different way to report problems. These are just examples to get you started. You decide what works for you in your context.

Product Area 1
This product area is colored yellow to indicate that the testers found potential problems that you feel are important enough for everybody to talk about. Three sub-areas were tested and the team found an implementation in Sub-area 1 to be potentially problematic. In this example, you give a high level status of the problems (UX issues) and the next step (discussing with design). If the stakeholders want more info, you provide a link to the test report that details the findings of that test session. 

Product Area 2
This product area is colored red, indicating that either a very important problem was found that definitely threatens on-time shipping, or a testability issue is interrupting testing, or both.

Product Area 3
This area is colored yellow, and there is a link to an actual bug report that the test lead would like to discuss with decision-makers. The lead feels this bug might pose a larger threat, but a group discussion is needed to figure that out. There is also a testability problem in this area, which the lead wants to raise, as it may also threaten long-term efficiency.

Product Area 4
This area is colored gray because there is a testability problem that is blocking the entire effort in that particular area of the product. The test team is waiting on resolution before they can proceed.

Product Area 5 and 6
These areas are colored green, indicating that so far the test team has not found any significant problems that would threaten the release schedule and they had a relatively easy time testing – there were no major testability issues.
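To make the roll-up across areas concrete, the six product areas above can also be represented as plain data. The dictionary below is a hypothetical sketch of this example report (the notes are paraphrased from the walkthrough, not real findings), and the `Counter` roll-up answers the key stakeholder question at a glance:

```python
from collections import Counter

# Hypothetical data mirroring the six product areas in the example report.
report = {
    "Product Area 1": {"status": "yellow",
                       "note": "UX issues in Sub-area 1; discussing with design"},
    "Product Area 2": {"status": "red",
                       "note": "serious problem and/or testability blocker"},
    "Product Area 3": {"status": "yellow",
                       "note": "bug may pose a larger threat; testability concern"},
    "Product Area 4": {"status": "gray",
                       "note": "testability problem blocking all testing"},
    "Product Area 5": {"status": "green", "note": "no significant problems"},
    "Product Area 6": {"status": "green", "note": "no significant problems"},
}

# Roll up statuses for the "did we find release-threatening problems?" question.
summary = Counter(area["status"] for area in report.values())
print(dict(summary))  # → {'yellow': 2, 'red': 1, 'gray': 1, 'green': 2}
```

The roll-up is a conversation starter, not a verdict: the red and gray areas are where the status meeting should spend its time.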

Quality

This section is all about the kind of problems we are looking for as we test. I use the Heuristic Test Strategy Model to help me think of the quality dimensions we should be testing against.

In this example, based on the needs of the project, we decided to also test for performance and accessibility problems, in addition to doing the testing needed to get the release out the door. Accessibility is colored yellow because we found problems that the team is discussing but that are not showstoppers. We just want the larger team to know what’s happening, so we included that status in this report. Performance is colored green to indicate that we did not see any major issues with performance this release cycle.

When we introduce these dimensions to our teams through reporting and discussion, they start thinking of quality in ways that lend themselves to solving problems. Instead of just talking about quality in a general way and getting nowhere, team members will start thinking about and referring to these dimensions more frequently when discussing the product.

If we can get specific about the problems, we can do more to solve them.
