In agile software implementation projects, it is common to conduct testing throughout the build stage. This includes testing at the end of each sprint, commonly referred to as release point testing, and again at the end of the build stage, known as end-to-end testing. Release point testing during the build stage is critical to the project’s success because it allows the client to review the work delivered in the sprint and provide relevant feedback before the next sprint starts. This enables agile course corrections and improvements. Identifying and correcting mistakes early can prevent them from snowballing into bigger issues that require more time and effort to fix.
The challenge of testing often lies in encouraging your client to test more. However, occasionally you may encounter the opposite situation, where a client team tests excessively and provides an abundance of feedback. While this is a rare “good problem to have,” if not addressed, it can still inflate project hours, expand scope, and frustrate implementation team members.
When is too much testing a bad thing?
When testing becomes too time-consuming to manage, it can hinder your ability to respond and make corrections quickly, which is the opposite of agile. If you find yourself overwhelmed with meetings and emails trying to make sense of a large amount of feedback, it’s time to make a change. If you feel like you are starting to approach this problem, keep an eye out for three common pitfalls that lead to excessive feedback:
- Irrelevant or duplicate feedback
- Conflicting feedback
- Quicksand questions
Each of these pitfalls is small and easy to miss on its own, but together they take a toll on the project’s health, both in hours lost and in team morale. Here is how each pitfall commonly manifests and how you can correct it.
Note: Testing can be captured in a ticket, spreadsheet, email, or other record, and might be referred to as feedback, issues, enhancements, etc. I have tried to stay tool-agnostic in this post and generally refer to any feedback as a “Feedback Record.”
1. Irrelevant or duplicative feedback
Irrelevant feedback may or may not be valid, but is not directly related to the assigned test script. This often occurs when testers who will not be the end-users of a particular feature or process are assigned to test. This can lead to vague feedback based on anecdotal experience (“I don’t do this myself, but I’ve heard this is an issue…”), imagination forecasting (“what happens a year from now when this becomes my problem…”), or scope creep (“My department is downstream of this process, so I need …”). Each time this happens, the implementation team has to spend time and mental capacity processing and determining the relevance of the feedback. It may even require precious meeting time to discuss with the entire group.
Duplicative feedback occurs when too many people log the exact same feedback. While group consensus is beneficial, duplicate feedback leads to wasted time responding to multiple people about the same issue. These feedback records should be consolidated and entered only once so that the implementation team can use their time efficiently.
When you find yourself falling into this pitfall, check whether you have too many testers assigned to the test script. It’s a simple solution but an effective one. I recommend reducing the number of testers assigned to a test script to no more than three. Ideally, the assigned testers should always be the same people who will directly use the feature or process they are testing in their daily organizational processes post-go-live. Doing so reduces the risk of duplication and keeps testers focused on the substance of the assigned test script.
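The consolidation step described above can be automated in most feedback tools. As a minimal sketch, assuming feedback records are simple dicts with hypothetical `script_id`, `tester`, and `summary` fields (your tool’s actual field names will differ), duplicates can be folded into a single record while preserving the group consensus:

```python
from collections import defaultdict

def consolidate_feedback(records):
    """Group records that describe the same issue on the same test script,
    keeping one record and tracking everyone who reported it."""
    grouped = defaultdict(list)
    for record in records:
        # Hypothetical matching key: the test script plus a normalized summary.
        key = (record["script_id"], record["summary"].strip().lower())
        grouped[key].append(record)

    consolidated = []
    for duplicates in grouped.values():
        primary = duplicates[0]
        # Preserve consensus by listing every tester who reported the issue.
        primary["reported_by"] = sorted({d["tester"] for d in duplicates})
        consolidated.append(primary)
    return consolidated

records = [
    {"script_id": "TS-01", "tester": "Ana", "summary": "Save button fails"},
    {"script_id": "TS-01", "tester": "Ben", "summary": "save button fails "},
    {"script_id": "TS-02", "tester": "Cara", "summary": "Report total is wrong"},
]
merged = consolidate_feedback(records)
# Two records remain; the duplicate is folded into one with both reporters listed.
```

A real implementation would likely use fuzzier matching than an exact normalized summary, but even this simple key catches the most common copy-paste duplicates.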
2. Conflicting Feedback
Conflicting feedback is exactly what it sounds like: multiple people submitting feedback that contradicts feedback already submitted. Conflicting feedback is possible any time you have more than one tester assigned to a test script, but the risk grows sharply with each additional tester. When too many people are providing feedback, unless the feedback is unanimous, it often leads to meetings or broader discussions to understand and make decisions. As a project manager on an implementation project, I find it crucial to protect my time and the workload of my business analysts (BAs) so they can focus on their work. Catching conflicting feedback early is essential to avoid wasting time in meetings and to prevent BAs from implementing changes that they might have to undo later.
To minimize conflicting feedback, it’s a good idea to implement a pre-approved decision-making system. For example, I often suggest clients use a majority-rules decision-making process: if a conflict arises among the three testers, the implementation team resolves it in favor of the two-thirds majority. If the client does not agree to a simple two-thirds majority rule, another good system is to designate tie-breaker authority to one person. The client must designate this decision-maker in advance, and that person can be tagged or added to the feedback record with the sole duty of reviewing the conflict and making a final decision.
It doesn’t actually matter which decision-making model you choose, as long as you are in alignment with the client. Then, when the implementation team is navigating the inevitable conflicting feedback, they can leverage the system, document the decision, and move on with the correction, saving countless hours of meetings and potential arguments.
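Whatever model you agree on, it helps to write the rule down unambiguously. Here is a minimal sketch of the two rules described above, assuming each tester’s preferred resolution has been captured as a simple name-to-choice mapping (the field names and the `resolve_feedback` helper are illustrative, not from any specific tool):

```python
from collections import Counter

def resolve_feedback(votes, tie_breaker=None):
    """Apply a pre-agreed decision rule to conflicting tester feedback.

    votes: mapping of tester name -> that tester's preferred resolution.
    tie_breaker: the client's pre-designated decision-maker, consulted
    when no option reaches a two-thirds majority.
    """
    tally = Counter(votes.values())
    option, count = tally.most_common(1)[0]
    if count / len(votes) >= 2 / 3:
        return option, "two-thirds majority"
    if tie_breaker is not None:
        # Escalate to the client's pre-designated decision-maker.
        return votes[tie_breaker], "tie-breaker decision"
    # No rule applies: the conflict genuinely needs a conversation.
    return None, "escalate to a meeting"

decision, rule = resolve_feedback(
    {"Ana": "keep workflow A", "Ben": "keep workflow A", "Cara": "switch to B"}
)
# -> ("keep workflow A", "two-thirds majority")
```

The point is not the code itself but the discipline it represents: the decision rule is agreed on before testing begins, so the implementation team can apply it and document the outcome without convening a meeting.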
3. Quicksand Questions
Quicksand questions are what I call questions that are valid, but not actually relevant at the current moment or to the context of the test script. The main goal of testing is to identify bugs and issues in the build. Quicksand questions tend to draw multiple people into a conversation and slow down the work, but without contributing to the quality of the specific test script being reviewed. Furthermore, dismissing these questions can be challenging for the project manager because, at first glance, the question appears valid, and ignoring or dismissing it may lead to client frustration.
Note: Quicksand questions differ from irrelevant feedback in that irrelevant feedback often involves a request for a change, fix, or correction that is unrelated to the test script. Quicksand questions usually stem from a client’s learning or a gap in understanding, but do not require any change or correction.
The best way to correct quicksand questions is to acknowledge and redirect. When a client asks questions, it shows engagement with the material and consideration of its impact. Acknowledge the importance of the question and the learning being demonstrated. Then, redirect the question away from your implementation team. You can start by leveraging your existing feedback tool or ticketing system to mark questions with a status or type of “Question.” Next, instruct your implementation team to skip these questions and focus on actual feedback. Finally, as the project manager, you can collect these questions and add them to a help document, a future meeting agenda, or address them yourself individually.
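The triage step above is simple enough to script against most feedback tools’ exports. As a minimal sketch, assuming records carry a hypothetical `type` field set by the tester or project manager (your tool’s tagging mechanism will differ), feedback can be split into work for the implementation team and questions for the PM to collect:

```python
def triage_feedback(records):
    """Split feedback records into actionable items for the implementation
    team and questions for the project manager to acknowledge and redirect."""
    actionable, questions = [], []
    for record in records:
        # Assumed convention: a "Question" type marks a quicksand question.
        if record.get("type") == "Question":
            questions.append(record)
        else:
            actionable.append(record)
    return actionable, questions

records = [
    {"id": 1, "type": "Bug", "summary": "Export fails on large files"},
    {"id": 2, "type": "Question", "summary": "How will renewals work next year?"},
]
actionable, questions = triage_feedback(records)
# The implementation team works only the actionable list; the PM collects
# the questions for a help document or a future meeting agenda.
```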
Having a system in place to catch and redirect these questions shows the client that their questions are welcome, while also demonstrating that the implementation team’s time is valuable and needs to be protected for deep concentration work.
Conclusion
At the end of the day, too much testing is a problem I am usually glad to have! I would much rather have an over-engaged and active client than a distracted or disengaged one. More feedback will usually lead to a higher-quality implementation. But in some cases, it can take a slight adjustment to focus the testers on providing more relevant and targeted feedback. When you find yourself caught in a black hole of endless feedback, consider reducing the testers to three, ensuring only future end users are testing, implementing clear decision-making rules, and filtering out distracting questions. Being proactive with these steps to reduce an overwhelming quantity of feedback will in turn increase the quality of the feedback, which will be easier to resolve, correct, and turn around for re-testing.