Tuesday, 1 November 2011

Recording, Reporting and Storing

My post over the weekend generated several questions about recording, reporting and storing test results.

On the project I work on we have approximately 25 maps. Each map represents a functional area of the application. For example, there is a map that documents the Install test cases. All of the maps are stored in a directory within a file management system. You could use SharePoint, or store the files within your source code repository (Subversion, etc.). There are numerous tools out there. The reason we use such a system is that it gives us version control, history, and the ability to “reserve” files so one person can edit a map without worrying about it being overwritten by another tester.

The folder structure could be the following:

\Master Test Maps
\Sprint 1
\Sprint 2
\Sprint X

If you are not testing by Sprint, you could also create folders for Builds, etc. Choose what works best for you.

When a new Sprint starts, a new folder is created for that Sprint. Copies of the test maps are made from the Master Test Maps folder to that new folder. Depending on the scope of the testing you wish to complete during a Sprint, you may not wish to copy all of the maps over. This is a simple process. Unmarked maps are copied over to the new folder, ready for the tester to mark up.
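That copy step is easy to script. Here is a minimal sketch, assuming the folder layout above and that the maps are `.xmind` files; the function name, paths and file extension are illustrative, not part of our actual setup:

```python
import shutil
from pathlib import Path

def start_sprint(root, sprint_name, maps_to_copy=None):
    """Copy unmarked master maps into a fresh sprint folder.

    maps_to_copy: optional list of map file names to copy;
    if None, every map in the master folder is copied.
    """
    master = Path(root) / "Master Test Maps"
    sprint = Path(root) / sprint_name
    sprint.mkdir(exist_ok=True)
    for map_file in master.glob("*.xmind"):
        if maps_to_copy is None or map_file.name in maps_to_copy:
            # copy2 preserves timestamps, which helps when auditing later
            shutil.copy2(map_file, sprint / map_file.name)
    return sprint
```

Passing a list of file names lets you scope the Sprint to just the areas under test.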

Our maps have evolved greatly and are now written in a Given, When, Then format (a separate blog post on that will follow). The tester walks through the map and uses the tools within the mapping software to record whether each test case passed or failed. Xmind has a green checkmark icon (test case passes) and a red X icon (test case fails) which can be placed on the Then statements.

Xmind is an excellent mind-mapping tool, but it does have some limitations. We are now using MindJet MindManager (http://mindjet.com/). MindManager has several advantages, but one is the ability to run macros. A macro has been written that “walks” through all the maps that were tested and counts up the number of Passed, Failed, etc.
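MindManager macros are written in its own Basic-style macro language, but the idea behind that counting pass can be sketched in Python. Classic Xmind files, for instance, are zip archives whose `content.xml` records the markers placed on topics; the marker id strings below are assumptions for illustration only, so check your tool's file format for the real ones:

```python
import zipfile
from pathlib import Path

# Assumed marker ids -- verify against your mapping tool's actual format.
PASS_MARKER = "task-done"     # hypothetical id for the green checkmark
FAIL_MARKER = "symbol-wrong"  # hypothetical id for the red X

def tally_results(sprint_folder):
    """Walk every map in a sprint folder and count pass/fail markers."""
    passed = failed = 0
    for map_file in Path(sprint_folder).glob("*.xmind"):
        # Classic Xmind maps are zip archives containing content.xml.
        with zipfile.ZipFile(map_file) as zf:
            xml = zf.read("content.xml").decode("utf-8")
        passed += xml.count(PASS_MARKER)
        failed += xml.count(FAIL_MARKER)
    return passed, failed
```

A real macro would walk the topic tree rather than count substrings, but the totals it produces feed the Dashboard the same way.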

One other file is also included in the Sprint/Build folder, which I call the Dashboard. This file is key to recording and reporting the results. We use Excel for this, but any format would work, depending on the results you want to record. Here is a sample template:

Test Map      Tested By     Defects Found
Test Map 1                  BUG123, BUG875
Test Map 2
Test Map 3
The template above is very simplistic, but it will give you an idea of what you could do. The Dashboard we use has more columns to cover the level of testing Requested, Covered, Map Quality and other metrics to cover off N/A, Blocked, etc. in regards to the test cases. (Future blog posts will go into Requested, Covered and Map Quality.) The key to a good Dashboard is to make it as simple as possible to use and to only record information that is relevant.

For a long period of time we were not gathering specifics on the number of test cases. I still wonder about the value of this information, and go back and forth on that debate in my head often. In Agile, what I care most about is whether the User Story is Done. Done to me means that it is coded, it has been tested, and there are few to no bugs remaining. To satisfy the needs of the organization we now record this information, and it has proved to be beneficial on multiple levels.

When testing is done, the Dashboard is copied into an email that is sent out to the extended stakeholders.
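That hand-off can be as simple as rendering the Dashboard rows into an aligned plain-text table for the email body. A sketch, assuming the Dashboard has been exported to CSV with the columns from the sample template above (the function name is mine, not part of our process):

```python
import csv
import io

def dashboard_to_text(csv_text):
    """Render Dashboard CSV rows as an aligned plain-text table."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    # Width of each column is the widest cell in that column.
    widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
    return "\n".join(
        "  ".join(cell.ljust(w) for cell, w in zip(row, widths))
        for row in rows
    )
```

Using `csv.reader` rather than splitting on commas keeps defect lists like "BUG123, BUG875" intact when they are quoted in the export.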

There is a long list of other topics that I will be covering on this blog. Please keep your questions coming and I will address them.


  1. Looks like your matrix could be interfaced with the problem reporting database through a CSV file to accomplish test case to problem reporting mapping for the standups each morning.

    Excellent write up and I truly like what you have done with the tool.

  2. Very nice Nolan. I like how you've extended the capabilities of Mind Mapping by writing macros to meet your needs.

    Would be nice to see some examples of the maps structure, if you're allowed to share them here.

  3. Thanks Larry and Darren.

    I have looked at using Excel and moving that information into the reporting database that other Quality Center users' information is pulled into. I need to explore that a bit further, but I want to know the true value of doing that first.

    Sharing maps of our actual test cases would not be allowed. I'll write a post that gives an explanation of Given, When, Then, shows priority, etc.

  4. So far so good, and wonderful experience reports.

    I'd suggest, as a next step, to grow away from the concept of test /cases/, and move into a more open notion of risks and test /ideas/. A test case, after all, is typically expressed in terms of a specific question that we ask of a program. A test idea is more open: can the program successfully deal with questions like this? What else is going on besides the answer to the question we're asking? This moves testing onto discovery and investigation, rather than confirmation—which is terribly important if you want to find problems.

    I really appreciate that you're running this series. Good on you.

    ---Michael B.

  5. Hi Michael,

    I will start reading up more about using test ideas. Do you have any suggested reading? This makes a lot of sense to me. I mentioned in an earlier post that testing is often like a minefield. We keep walking across the minefield the same way every single time, and therefore we don't find new mines (bugs). We need to break free from that set path (test cases) and move towards walking around, and looking in other areas (test ideas).

    Right now testing is very much just confirming that the test cases pass. With maps I hope the testers do more exploring. Now I just need to show this has value and help the team grow in this direction.

    Thank you,