If you are a project lead, a project manager, or a member of the project team tasked with categorising User Acceptance Testing (UAT) findings, this article will give you a simple three-phase method for the job.
When I started off in the digital project management and systems/software integration (SI) world, I discovered a number of tasks that never seemed to be discussed anywhere on the net, not even in user groups. It quickly became apparent, when untangling these tasks, that they were the linchpins an SI project manager used between specific phases of a project to keep it from derailing. This article focuses on the task of categorising User Acceptance Testing (UAT) findings, which links the end of the UAT cycle to the beginning of the Defect Rectification phase.
There are various knowledge areas that focus on the management of User Acceptance Testing, but I find the International Software Testing Qualifications Board (ISTQB) and its syllabi to be an excellent reference.
Below, I detail a tried and tested method for the ‘categorising UAT findings’ task. For ease, I have broken the method down into three phases, with a few simple steps in each. However, before we get started, let’s take a step back and set the scene.
BACKGROUND
We have completed development/configuration, released it for some Quality Assurance (QA) by the Subject Matter Expert (SME), handed it over for UAT, and waited. After the agreed timeframe, we find ourselves with a couple of hundred findings to sift through. What could have gone wrong? Did we really miss the mark by that much? What do you do next, panic? No. Just step back, take a breath, and start categorising. One last thing before we go further: let’s mark three spots on the map to visit along the way: the to-do pile, the training lounge, and the parking lot.
Phase One: slicing the top layer off the cake
To make life easier, let us slice through the top layer and remove anything that might be creating a distraction.
Step one identifies the feedback that has been provided with a design mindset. (In a related article on SI Project Management Personas, I talk about the tester-designer persona that explains this concept in greater detail.) This feedback is usually the result of prior experience using similar software and exposure to other design approaches. Whilst useful, it can create noise in the process, as these are not actual defects but potential design enhancements. A word of caution: if a pattern emerges in the feedback, it may point to a gap in the requirements gathering process that may need to be investigated. Otherwise, identify and categorise these findings as tester-designer findings and pop them in the parking lot for review later.
Step two differentiates functional from non-functional findings. Focus on picking out the functional findings first, and leave them for the next phase. Something of note: if you have invested the time and money into writing test scripts, you are likely to have fewer of these findings (unless non-functional testing is included). This carries a risk in that users may become myopic, focusing only on the specific functional tests outlined in the test scripts, meaning they don’t spot items that might further improve the user experience. This needs to be considered if one of the agreed success criteria is improved user experience; if so, tweak your scripts to include non-functional tests. Otherwise, park any non-functional findings in the parking lot.
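The two Phase One steps above can be sketched as a simple first-pass filter. This is only an illustrative sketch: the dictionary fields (`design_feedback`, `functional`) are my own assumed tags, not a prescribed data model for UAT findings.

```python
# Phase One sketch: slice the top layer off the findings list.
# Assumes each finding is a dict the reviewer has tagged with two
# illustrative flags: "design_feedback" (tester-designer feedback)
# and "functional" (functional vs non-functional finding).

def slice_top_layer(findings):
    """Move tester-designer and non-functional findings to the parking lot;
    return the remaining functional findings for Phase Two."""
    parking_lot = []
    remaining = []
    for finding in findings:
        is_design = finding.get("design_feedback", False)
        is_functional = finding.get("functional", True)
        if is_design or not is_functional:
            parking_lot.append(finding)   # review later, after triage
        else:
            remaining.append(finding)     # carry forward to Phase Two
    return remaining, parking_lot

findings = [
    {"id": 1, "functional": True},                           # a real finding
    {"id": 2, "design_feedback": True, "functional": True},  # design mindset
    {"id": 3, "functional": False},                          # non-functional
]
remaining, parked = slice_top_layer(findings)
# remaining holds finding 1; findings 2 and 3 sit in the parking lot
```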
Phase Two: getting stuck into it
Now that you have removed the top layer, I firmly believe you are looking at four key categories. Work your way through the list, categorising the findings as detailed below:
- Enhancements: identify items that are clearly not in scope (not in the specification documents). Before you shift them to the parking lot, I find it easier to enter into dialogue with the client on whether any of these items relate to compliance issues or are a must-have before go-live; otherwise, park them for later. If they are needed for go-live, kick off a parallel scoping exercise to ready these findings for inclusion. This is also a good point to bring up any other items you have already added to the parking lot.
- Clarifications: these are the neither-here-nor-there findings, but they are also your negotiation tactic; if you lump enough in here, you can walk away without selling your soul. What I have found is that most of these are usually enhancements anyway, and some of them may be explained away as a lack of knowledge of the designed solution. But there will also be a few defects buried in there. The tricky part is teasing out the defects (to the defect pile) from the training needs (training lounge) and enhancements (parking lot). If this is not done properly, you will most probably find most of them in the defect pile, significantly increasing the costs of rectification and regression testing.
- Training: these findings usually arise from a lack of understanding of how the system has been designed to meet the requirements. These can be closed out with some training, so I go ahead and add them to the training lounge.
- Defects: OK, yes, your quality is good, but we are all human. There is always going to be something someone missed, so cop it on the chin and get it fixed. But hold up, don’t launch into fixes just yet. If the customer comes back to you to include some enhancements in the next release, you may be able to save on release time and costs. Otherwise, these go in the to-do pile.
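The four categories above and their destinations can be sketched as a simple routing table. The category names mirror the article; the `must_have` and `resolved_as` fields are illustrative assumptions about how a finding might be tagged, not part of the method itself.

```python
# Phase Two sketch: route each categorised finding to its pile.

DESTINATIONS = {
    "enhancement": "parking lot",     # unless it is a go-live must-have
    "training": "training lounge",
    "defect": "to-do pile",
}

def route_finding(finding):
    """Return the destination pile for a single categorised finding."""
    category = finding["category"]
    if category == "clarification":
        # Clarifications must first be teased apart into one of the other
        # three categories; assume the reviewer records that in "resolved_as".
        # Most turn out to be enhancements, so default to that.
        category = finding.get("resolved_as", "enhancement")
    if category == "enhancement" and finding.get("must_have"):
        # Go-live must-haves trigger a parallel scoping exercise instead.
        return "scoping exercise"
    return DESTINATIONS[category]
```

A quick walk-through: `route_finding({"category": "defect"})` lands in the to-do pile, while `route_finding({"category": "enhancement", "must_have": True})` triggers the parallel scoping exercise described above.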
Phase Three: clean up
Now you must be thinking: right, let’s hit the to-do pile and start shovelling. Well, hang on again. Here is what you do:
- Don’t forget that any subsequent release will most probably cost you money and time. If you negotiate successfully and the customer agrees to bring some of the enhancements into scope, you reduce the total cost of release management. This is one way to keep ongoing costs down.
- Once you are done negotiating and agreeing, it is time to start shovelling the to-do pile (along with any agreed enhancements). Get all the agreed items across the line for the next round of testing. Wasting any more time will simply cause slippage.
- If you also offer training as part of your delivery, pick out the items in the training lounge and add them to your training packs. Otherwise, invest some time in up-skilling the internal training teams. This will ensure that all the users agree these findings were a lack of solution knowledge, rather than defects, in the first place. Otherwise, the moment you turn your back to walk away, the findings that were accepted as training opportunities may end up being a crack in the delivery, leading to potential adoption failure.
- Lastly, anything that didn’t get picked up from the parking lot as part of this delivery is your opening statement for the next tranche of work. You know it, and the customer knows their staff want it. The business case is already written; all that is left is to lock it in.
If you find software testing an area of interest and want to expand your knowledge, the ISTQB certifications are one pathway to do so.