Why Testers Stay Silent (and What to Do About It)

April 18, 2025

I recently read The Design of Everyday Things by Donald Norman. This highly recommended book explores the interconnection between human psychology and the way we interact with the designed world. Whether the technology is physical or virtual, our experience using a product is shaped not only by elements of design but also by our thoughts and emotions.

In Chapter 2, "Blaming the Wrong Things," Norman explains a fascinating psychological phenomenon that I think most people can relate to, and one we can apply to gathering feedback on software implementation projects. The phenomenon is simple:

People interpret failure and success differently based on whether they are observing others or experiencing it themselves. 

I see this phenomenon most often in the testing phase of software implementations. During this time we ask the client to try out a new piece of software, often for the very first time, and report issues and bugs back to us. But are the issues found due to poor design, or are they due to user error? The way a user interprets this difference is directly connected to the quantity and quality of the feedback they submit.

Who’s to Blame?

When we have to engage with a new piece of technology, whether it’s a new app or a new dishwasher, we look for design hints to guide us down the correct path. These signifiers can be design elements that live directly within the tool itself, like knobs and buttons, or documentation such as user manuals with images and instructions. As we assess all the readily available information, we proceed with our interaction and continue to look for signs, both positive and negative indicators, that give us feedback throughout the entire process.

Intuitively, we all know we do this, and we can likely all recall struggling to identify the right process the first time we learned a new tool. However, according to Norman, what we see time and time again in human behavior is a readiness to judge the struggles or failures of others as a deficiency in skill or ability rather than a result of poor design, especially if we are able to successfully complete the scrutinized action ourselves. The converse is also true: if we see someone succeed at a task, we often credit the design, not the individual. Norman summarizes this tendency well:

“It seems natural for people to blame their own misfortunes on the environment. It seems equally natural to blame other people’s misfortunes on their personalities. Just the opposite attribution, by the way, is made when things go well. When things go right, people credit their own abilities and intelligence. The onlookers do the reverse. When they see things go well for someone else, they sometimes credit the environment, or luck.”

So whether we are the ones failing or we are watching someone else fail, we tend to blame a person first, not the design.

This becomes a concern in testing: when users don’t attribute errors to the design, they are less likely to log feedback that could improve the overall quality and spare others from experiencing the same issue.

As project managers, if we are not aware of this tendency, we might expect users to report every issue they experience, and assume that if they log no issues, they are having no trouble. Yet what we often encounter instead is silence. Norman points out that in certain contexts, especially with unfamiliar or complex systems, people internalize failure. Not because it’s their first instinct, but because:

  • They lack confidence or assume others aren’t struggling.
  • There’s social pressure or workplace dynamics at play.
  • The design fails silently — it doesn’t give clear feedback that it’s broken.

As Norman says, “People feel guilty and embarrassed when they have trouble with everyday things. They are apt to blame themselves. The resulting emotions can be severe: people may become anxious, frustrated, and angry.” This guilt and embarrassment create a culture of silence in which feelings of frustration and helplessness stay hidden.

In software testing, this dynamic is extremely detrimental to the success of the launch and the eventual use of the solution, so it is worth our time as project managers to understand and remedy it. The first step is understanding the psychology of why people may not be logging feedback even when they are struggling. Once we accept that this is a natural human reaction, here are three ideas for mitigating these dynamics and getting people to engage more fully in testing.

Three Ways to Encourage Feedback During Testing

1. Encourage open, blame-free communication early and often

The first step in combating this problem is encouraging a culture of openness and blame-free communication on the project. This starts long before testing does and can be expressed in a variety of ways. As a project manager, encourage curiosity and questions so team members feel comfortable expressing confusion. Look for ways to be transparent about mistakes and confusion so people are not ashamed to raise their hands. You can reinforce this by sharing examples of your own testing mistakes, or with reminders such as, “We are testing the system, not the user.”

2. Test as a Team

Because the risk of this phenomenon is a cloud of silence that hides problems from others, an easy way to break that silence is to test as a group! In this post, I go deeper into what that looks like, but if you find your team is testing in individual silos, consider scheduling dedicated time for collaborative team testing. Get all testers together in the same room (or virtual room) and test a process together, passing the same test record upstream or downstream and watching each other test. This naturally encourages communication as team members work together to progress a test script from start to finish.

3. Create a Leaderboard or Dashboard

The larger the project, the more departmental segmentation you will find. When testing does not naturally overlap, testing as a team may not be appropriate; group testing between the fundraising and grantmaking departments, for instance, may not be feasible. To create more visibility between unrelated departments, consider creating a visual indicator to track and highlight feedback. This could take the form of a low-stakes leaderboard showing the number of issues or enhancements logged by each team or individual. Alternatively, you could build a dashboard with testing KPIs, such as:

  • Total number of issues found
  • Average time to resolve
  • Issues reported by department
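If your issue tracker can export logged items, these KPIs take only a few lines to compute. The sketch below is a minimal example, assuming a hypothetical export shaped as a list of records with `department` and `days_to_resolve` fields; adapt the field names to whatever your tool actually produces.

```python
from collections import Counter
from statistics import mean

# Hypothetical issue records, as an issue tracker might export them.
# The field names and values here are illustrative, not from any specific tool.
issues = [
    {"department": "Fundraising", "days_to_resolve": 3},
    {"department": "Grantmaking", "days_to_resolve": 5},
    {"department": "Fundraising", "days_to_resolve": 2},
]

# The three dashboard KPIs listed above:
total_issues = len(issues)
avg_resolution_days = mean(i["days_to_resolve"] for i in issues)
issues_by_department = Counter(i["department"] for i in issues)

print(f"Total issues found: {total_issues}")
print(f"Average time to resolve: {avg_resolution_days:.1f} days")
for dept, count in issues_by_department.most_common():
    print(f"  {dept}: {count}")
```

Even a script this small, run weekly and shared with the team, gives every department a glimpse of the feedback others are logging.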

The goal isn’t competition; it’s visibility and normalization. Sometimes just seeing that others are logging feedback can relieve personal pressure and encourage more participation.

Conclusion

Whether we’re interacting with a physical object, like a dishwasher, or a digital one, like a CRM, we bring our human psychology into the equation. One of our common tendencies is to blame ourselves for failure, even when the real culprit is poor design.

When this shows up in software implementations, it can lead to underreported bugs, poor testing, and ultimately, less successful solutions. To address this, encourage open dialogue, build team-based testing opportunities, and create visibility through visual tools like leaderboards or dashboards.

By better understanding how people experience and attribute success or failure, we can create more thoughtful, supportive environments that improve software testing and outcomes for everyone involved.


Written by Saul, who lives and works in Boise, ID. You should connect with them on LinkedIn.

Click here to read my AI Statement.