Sunday 27 November 2011

Test Cases vs Test Ideas - Looking for assistance


Michael Bolton (http://www.developsense.com/) commented on my blog about moving towards the concept of Test Ideas. There is still much I need to learn about using Ideas versus Cases. I feel, though, that this may be how I originally had the team create the maps before moving to the Given, When, Then format. I wanted the testers to think freely about scenarios instead of following a prescribed way to complete a “test case”. We simply did not have expected results on our maps at all. Was what we were doing Test Ideas?

Our applications and the system we test allow for multiple ways to complete a given task, so thinking in terms of an “idea” fits well. As I continue to research and grow, I expect the true difference between “test cases” and “test ideas” will become clearer in my mind. I’m on a journey of learning and growing here, which excites me.

Please comment and share your thoughts about this. Do you have good blogs, books or articles that I should read? How do you define a test idea? Do you use ideas?


Update: Blog link from Darren:
http://www.bettertesting.co.uk/content/?p=1438 

Our Current Test Process


There have been several comments on past posts about the importance of testing changes to the system. The team currently does this, and it is captured in reports. I thought I would simply provide some additional information for clarification.

Our process dictates that when we test a build we do the following:
  1. Run Acceptance Tests.
  2. Test all changes. This is done using our bug tracking system. When a build happens, each bug is moved to a Verify state. The test team self-organizes, and each tester decides what to test and verifies the issues. Every change a developer makes to the code is matched to either a bug in the bug tracker or to a User Story or Task on our sprint board. This ensures we test all the changes.
  3. Run through all other test cases based on Coverage Level. In an earlier post I said Priority, but in truth it is Coverage Level. That level is determined at the beginning of the sprint, based on how much testing we require for a map. Remember, each functional area of the product has one map. We test the maps where changes occurred to ensure that no other areas were broken by accident.
  4. Results are updated on our Dashboard.
  5. A summary is sent out to all those who are interested, which includes a copy of our Dashboard, a list of new bugs found, and the list of verified changes.
I hope this post gives you some clarification on what we do when we receive a build to test. A future post will describe our overall process during a given sprint.

General Update


Over the past few weeks I have received a lot of feedback on my blog through comments, email and Twitter (@nmacafee). Each comment, good or bad, has been very helpful. I am a tester like many of you reading this blog, who wants to grow and explore new ideas and concepts.



Thank you to everyone who has commented; I find it encouraging. I feel that what I have been doing to change how we test within my organization is justified, but there is still plenty of fine-tuning to do. It was almost a year ago now that I first came across mind maps and shared the idea with the team. There were several confused faces during that first meeting, but they were willing to give it a try.

There has been a good cultural shift within the team. Developers are now comfortable with the mind maps and are even using them for their own purposes. Other test organizations within the company are coming to me for guidance on how they could implement mind maps for testing in their groups. There are still many skeptics out there who do not believe that this is a real “test management system”, but I continue to push forward turning those people into believers.

Change is a good thing. Sometimes we need to move past conventional testing methods simply to shake things up. Standard test case writing still has a place and works for many. Mind maps, though, have opened the door to exploring new ideas within my team. We have saved time, which, I’m excited to share, has allowed everyone on my team to learn some basic automation. When a test case is automated, it is now marked on the map. There is still much to do, but I have been very encouraged by the progress we are making.

Agile Ottawa Meetup - User Stories


On Thursday night this week (November 24, 2011) I attended the Agile Ottawa Meetup Group (http://www.meetup.com/Ottawa-Scrum-Users-Group/). I had attended in the past but, unfortunately, did not have a very good experience. I won’t share the details; I put that aside and decided to go again. I am happy to report that this session was well organized and I learned more about Agile.

Below are my thoughts on the evening.

The meeting was focused on writing User Stories. The presenters, who acted as the “Product Owners”, gave us a mockup application that they wanted us to “develop”. The group self-organized into four teams of approximately four people each. We then wrote User Stories for the application. I have been writing User Stories for a while now, but I did learn two things: A) another template that can be used, and B) the mindset of those involved in the writing can be very different.

Templates

This is the one I have been using:

As a <user>
I want to <task>
In order to <goal>

The new template I learned was:

In order to <goal>
As a <user>
I want to <task>

(Side note: both fit well with Given, When, Then as well. :) )

The new template simply rephrases the order to help the story flow better. Depending on the scenario and the phrasing required, I noticed that we switched between the two templates. Basically, we used whichever sounded right.
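
For example, the same story (a hypothetical one, not from the meetup) reads naturally in either order:

As a registered user
I want to reset my password
In order to regain access to my account

In order to regain access to my account
As a registered user
I want to reset my password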

Tester, product owner and developer mindsets can be very different when writing a user story. On teams made up mostly of developers, the User Stories were very technical, while the testers’ User Stories were more general in nature. This reaffirmed for me that the Power of Three is necessary for User Story writing. The whole team should do this: not just the developers, not just the testers, and not just the product owners. Everyone should sit together to write the User Stories, as this removes confusion and helps start the iteration in a good direction.

Some good points that I picked up:
  •        Focus on what is necessary in a user story
  •        Do not overthink the user story. Some teams went wildly off track from what the Product Owner wanted in the product. There is nothing wrong with thinking about how the application will work in the future, but overthinking can “cloud” our tasks for the current Sprint/Iteration.
  •        A user story should be externally testable. With this in mind, we started to write three Acceptance Tests per User Story (see the sketch after this list). This really helped us focus on writing clearer and shorter User Stories, and it will help the team define Done as well.
  •        The presenter suggested that each User Story should take up no more than half of your Sprint/Iteration.
  •        If there is research involved in a User Story or task, make sure you time box it, or it can grow out of control.
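
As an illustration of three Acceptance Tests for one User Story, here is the hypothetical password-reset story from earlier written in the Given, When, Then format:

Given a registered user on the login page
When they request a password reset with their email address
Then a reset link is emailed to that address

Given a registered user with a valid reset link
When they choose a new password
Then they can log in with the new password

Given an email address that is not registered
When a password reset is requested for it
Then no reset link is sent and no account details are revealed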


I was happy to hear that some people use diagrams to write their user stories. They didn’t realize that what they were doing was essentially mind mapping. Mind maps are just very useful and can be applied to many things. Using maps for User Stories is something I would like to introduce to my team going forward.

Some good (or bad) quotes from the night:

“Telling a developer how exactly it should look will take away the freedom for developers to add value.” (I am not in agreement with this. As a team, you should be working together toward a common goal. Product Owners should know what is being developed, and testers should have a clear understanding before development starts (ATDD) in order to prevent confusion and test case rewrites at the end.)

“Keep User Stories big when they are out there, but when they come closer (in time) break them down, like the game of Asteroids.” (Again, I wonder about this. If something is not yet ready to be worked on in a Sprint, shouldn’t it simply sit on your backlog as a requirement? Should you have large User Stories that cover future development, or simply statements of what they are?)

“The more self-sufficient your team is, the more effective it is.” (I agree with this.)

“Break down barriers between teams.” (Definitely. I was surprised how many at the meetup were working at companies where the developers, testers, UI developers and product owners worked on separate teams and even in separate locations! That does not sound Agile to me. I’m just very happy not to be in that situation currently.)

Overall I enjoyed the session and learned more. The most valuable thing I got out of going, though, was realizing how happy I am with my current job and the team I work with. Our team is quite mature when it comes to Agile and we are doing really well. Sometimes I think the grass is greener on the other side, so it was nice to see how other companies work (some are cutting edge, and some are still learning). I know that I have a good balance where I currently am, and the freedom to continue growing.

Sunday 6 November 2011

Priority


As our maps have grown in size, we needed to organize them in order to set priorities for testing. Just as with many standard written test cases, we gave each THEN node a priority.

Disclaimer: our use of Priority and a Dashboard is based on http://www.satisfice.com/presentations/dashboard.pdf. It proved to be a valuable starting point for what we are doing.

Priority 1 – Acceptance:
This tests the basic functions of our product with simple data. We run these tests each time we get a build. As we may have several builds a week, we created one Acceptance mind map that pulled in all of the Priority 1 test cases.

Priority 2 – Acceptance+: 
Same as Priority 1 Acceptance Tests, but now with more complex data.

Priority 3 – Common:
Tests the main features of the application.

Priority 4 – Common+:
Covers all features that are not commonly used by a customer.

Priority 5 – Validation:
Tests field validation, corner cases, stress, etc.

Xmind and MindManager both have five levels of priority by default, which matches this well. I am still considering reducing this to three levels and simply calling them High, Medium and Low, or Acceptance, Common and Corner. What do you think?

MindManager has good functionality that allows you to filter a map to show only the priority levels that you wish to see. This allows the testers to focus on what they are testing at a given moment.
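
MindManager’s filter is built in, but the idea itself is simple. Here is a minimal Python sketch of the same kind of filtering over a hypothetical list of test nodes and priority levels (the test names and numbers are made up purely for illustration):

# Hypothetical THEN nodes paired with their priority level
# (1 = Acceptance, 2 = Acceptance+, 3 = Common, 4 = Common+, 5 = Validation).
tests = [
    ("Product installs with default options", 1),
    ("Report exports with complex data", 2),
    ("Field validation on the licence key", 5),
]

def filter_by_priority(tests, levels):
    """Keep only the tests whose priority is in the requested set of levels."""
    return [name for name, priority in tests if priority in levels]

# An Acceptance run would look at Priority 1 and 2 only.
print(filter_by_priority(tests, {1, 2}))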

Priority also comes into play when deciding what to test during a given sprint. During our sprint planning session the maps are discussed alongside the User Stories. By the end of that meeting we know roughly which maps are affected by the new tasks. We also discuss as a team (development, testing, product and project management) what testing we should focus on during the week. That helps us decide which maps should be tested and at what levels we should test them.

During the week, we test the maps that we should focus on, running the test cases in order of priority. Once all maps are completed with the requested priorities, testers are encouraged to test other areas. The goal before product release is to complete testing on all maps and all priorities. This information is recorded on the Dashboard as well.

For any given sprint, I can tell which maps were tested and what level was completed. An Overall Dashboard allows the team to visualize the amount of testing completed week over week. More detail about the Dashboard and the Overall Dashboard will be given in future blog posts.

Wednesday 2 November 2011

Given, When, Then


Back in May when I first started blogging about mind maps for software testing I stated that the maps don’t always explicitly state the expected results. As our maps evolved, we started to add in expected results.

Having the expected results stated on the map is beneficial. Before, I trusted the test team to understand the functionality behind a given action. We have also started to share our maps with teams outside our organization that take care of testing we cannot accomplish at our location. To be clear, trust is one of the most important attributes a manager should have with his or her employees. :)

In August I attended Agile 2011. At a session presented by Elisabeth Hendrickson, we were taught a very basic concept that I am sure some in the room already knew. To me it was a valuable lesson. For the most part I am a self-taught tester, and there are always very useful basic tidbits for me to learn.

When writing Acceptance Tests, a simple and easy to use template is:

Given <a situation>
When <an action is performed>
Then <a certain expected result occurs>

During that session I was thinking about our maps and realized: why couldn’t I write the maps in this format as well?

I came back from the conference, created a template map, and shared it with my team. Using the Given, When, Then format has improved our maps greatly. It gives the testers focus on what they are writing, and it makes our tests easy to expand. What I really like is that each When (or action) has one or more expected results. In the past, matching the steps in a test case to its expected results was sometimes confusing; different testers have different styles, and some would write a single test step with four expected results.

Here is the basic template that I came up with. It gives the testers a common syntax for creating and updating our test maps. In several cases the testers were already writing in this format without realizing it; only some fine-tuning was needed to update their maps.



What you might notice is that you can easily expand the branches and continue writing tests. Another thought: for those writing automated tests with tools such as Cucumber, mind maps in this fashion would make a great front end. Looking at Cucumber’s syntax, I also see more possibilities for expanding my template.
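
To make that Cucumber idea a little more concrete, here is a minimal sketch of what the step definitions behind one Given, When, Then branch might look like, using behave (a Cucumber-style tool for Python). The step wording and the FakeApp driver are hypothetical, and the matching .feature file would simply contain the Given, When, Then lines taken from the map:

# steps/login_steps.py -- hypothetical behave step definitions
from behave import given, when, then

class FakeApp:
    """Stand-in for a real application driver (hypothetical)."""
    def open_login_page(self):
        self.page = "login"
    def login(self, user, password):
        self.page = "dashboard" if password == "secret" else "login"

@given("a registered user on the login page")
def step_open_login(context):
    context.app = FakeApp()
    context.app.open_login_page()

@when("they log in with valid credentials")
def step_login(context):
    context.app.login("user", "secret")

@then("they land on the dashboard")
def step_check_dashboard(context):
    assert context.app.page == "dashboard"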

I will write more about the priority of tests, etc. in upcoming posts. Thank you to everyone for your comments and questions here and on Twitter. Feel free to email me as well.

Tuesday 1 November 2011

Recording, Reporting and Storing


My post on the weekend generated several questions about recording, reporting and storing test results.

On the project I work on we have approximately 25 maps. Each map represents a functional area of the application; for example, there is a map that documents the Install test cases. All of the maps are stored in a directory within a file management system. You could use SharePoint, or store the files within your source code repository (Subversion, etc.). There are numerous tools out there. The reason we use such a system is that it gives us version control, history and the ability to “reserve” files, so one person can edit a map without worrying about it being overwritten by another tester.

The folder structure could be the following:

\Master Test Maps
\Sprint 1
\Sprint 2
\Sprint X

If you are not testing by Sprint, you could also create folders for Builds, etc. Choose what works best for you.

When a new Sprint starts, a new folder is created for it, and copies of the test maps are made from the Master Test Maps folder into that new folder. Depending on the scope of the testing you wish to complete during the Sprint, you may not want to copy all of the maps over. This is a simple process: unmarked maps are copied into the new folder, ready for the testers to mark up.
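
The copying at the start of a Sprint can even be scripted. Here is a rough Python sketch, assuming the folder names above; the map file names and the Sprint number are placeholders:

import shutil
from pathlib import Path

master = Path("Master Test Maps")
sprint = Path("Sprint 12")                       # placeholder Sprint folder
in_scope = ["Install.xmind", "Reporting.xmind"]  # maps selected for this Sprint

sprint.mkdir(exist_ok=True)
for map_name in in_scope:
    # Copy the unmarked master map into the Sprint folder, ready for markup.
    shutil.copy2(master / map_name, sprint / map_name)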

Our maps have evolved greatly and are now written in a Given, When, Then format (that will be a completely separate blog post to follow). The tester walks through the map and uses the tools within the mapping software to mark whether each test case passed or failed. Xmind has a green checkmark icon (test case passes) and a red X icon (test case fails) that can be placed on the Then statements.

Xmind is an excellent mind-mapping tool, but it does have some limitations. We are now using Mindjet MindManager (http://mindjet.com/). MindManager has several advantages, one of which is the ability to run macros. We have written a macro that “walks” through all the maps that were tested and counts up the number of Passed, Failed, etc.
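
For anyone staying with Xmind, a similar count can be approximated with a small script. Here is a rough Python sketch; it assumes the classic .xmind format (a zip archive containing content.xml), and the marker IDs are placeholders that should be checked against what your Xmind version actually writes:

import zipfile
import xml.etree.ElementTree as ET
from pathlib import Path

PASS_MARKER = "symbol-right"   # placeholder ID for the green checkmark marker
FAIL_MARKER = "symbol-wrong"   # placeholder ID for the red X marker

def count_results(map_path):
    """Count pass/fail markers in one classic .xmind file."""
    with zipfile.ZipFile(map_path) as archive:
        root = ET.fromstring(archive.read("content.xml"))
    marker_ids = [el.get("marker-id", "") for el in root.iter()
                  if el.tag.endswith("marker-ref")]
    return marker_ids.count(PASS_MARKER), marker_ids.count(FAIL_MARKER)

for map_file in Path("Sprint 12").glob("*.xmind"):   # placeholder Sprint folder
    passed, failed = count_results(map_file)
    print(f"{map_file.name}: {passed} passed, {failed} failed")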

The Sprint/Build folder also contains one other file, which I call the Dashboard. This file is key to recording and reporting the results. We use Excel for this, but any format would work, depending on the results you want to record. Here is a sample template:

Test Map   | Tested By | Defects Found  | #Tested | #Pass | #Fail
Test Map 1 | Sally     | BUG123, BUG875 | 50      | 40    | 10
Test Map 2 | Fred      |                | 20      | 19    | 1
Test Map 3 | Cindy     | BUG235         | 70      | 62    | 8

The template above is very simplistic, but it will give you an idea of what you could do. The Dashboard we use has more columns to cover the level of testing Requested, Covered, Map Quality and other metrics such as N/A and Blocked for the test cases. (Future blog posts will go into Requested, Covered and Map Quality.) The key to a good Dashboard is to make it as simple as possible to use and to record only information that is relevant.

For a long period of time we were not gathering specific test case counts. I still wonder about the value of this information, and go back and forth on that debate in my head often. In Agile, what I care most about is whether the User Story is Done. Done, to me, means that it is coded, it has been tested, and there are few to no bugs remaining. To satisfy the needs of the organization we now record this information, and it has proved to be beneficial on multiple levels.
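
If you would rather generate the Dashboard than fill it in by hand, the per-map counts can be written straight to a file that Excel opens. Here is a minimal Python sketch, using the sample rows from the template above (the numbers are illustrative only):

import csv

# Sample rows matching the template above; in practice these would come from the map counts.
rows = [
    {"Test Map": "Test Map 1", "Tested By": "Sally", "Defects Found": "BUG123, BUG875",
     "#Tested": 50, "#Pass": 40, "#Fail": 10},
    {"Test Map": "Test Map 2", "Tested By": "Fred", "Defects Found": "",
     "#Tested": 20, "#Pass": 19, "#Fail": 1},
]

with open("dashboard.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)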

When testing is done, the Dashboard is copied into an email that is sent out to the extended stakeholders.

There is a long list of other topics that I will be covering on this blog. Please keep your questions coming and I will address them.