However, assessing how quality and testing are compromised by human cognitive biases is an important issue that has not been fully investigated.
A cognitive bias is a systematic pattern of deviation from the norm or rationality in judgment.
It is a type of error in thinking that occurs when people are processing and interpreting information around them.
Before testing even starts, a tester is already influenced by cognitive biases through their personal judgments: which part of the product they believe contains the most bugs, who developed the functionality, the product's history, etc.
It is important to know these biases in order to minimise their impact and manage them effectively.
Negativity bias is the tendency for humans to pay more attention to, or give more weight to, negative experiences over neutral or positive experiences. Even when negative experiences are inconsequential, humans tend to focus on the negative.
For example, testers will not easily sign off a release when an undetected bug slipped through in previous versions or projects. They want to perform ever more tests to compensate for that earlier failure.
In order to reduce the effect of this bias, it is always better to analyse each project/version in terms of risk and define objectives before starting the tests.
Pairing objectives with risks allows defining measurable exit criteria (to determine whether the product is ready to be released) against which the tester gives their release sign-off.
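Measurable exit criteria agreed before testing begins can be sketched as a simple threshold check. The metric names and thresholds below are purely illustrative assumptions, not a standard; the point is that the release decision rests on data agreed in advance rather than on the tester's negativity bias.

```python
# Hypothetical sketch: exit criteria defined before the tests start.
# All names and thresholds are illustrative assumptions.
EXIT_CRITERIA = {
    "min_test_pass_rate": 0.95,        # share of executed tests that passed
    "max_open_critical_defects": 0,    # no critical defect may remain open
    "min_requirement_coverage": 0.90,  # share of requirements covered by tests
}

def ready_for_release(metrics):
    """Return True only when every agreed exit criterion is met."""
    return (
        metrics["test_pass_rate"] >= EXIT_CRITERIA["min_test_pass_rate"]
        and metrics["open_critical_defects"] <= EXIT_CRITERIA["max_open_critical_defects"]
        and metrics["requirement_coverage"] >= EXIT_CRITERIA["min_requirement_coverage"]
    )

# A run that meets all three thresholds passes the gate.
print(ready_for_release({
    "test_pass_rate": 0.97,
    "open_critical_defects": 0,
    "requirement_coverage": 0.92,
}))
```

Because the thresholds are fixed up front, a tester who feels the urge to "run just a few more tests" can check that urge against the agreed criteria instead of their gut.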
Confirmation bias is the tendency to search for, interpret, favour, and recall information in a way that confirms the tester’s previously existing beliefs or biases.
In general, if we think that the code of a specific developer has more defects than the code developed by others, then we will believe that we should spend a lot of time testing that module.
However, acting under the influence of this belief tends to increase the risk of missing defects in modules developed by other developers.
To reduce the effect of this bias, it is advisable to have the test scripts, test plans and test suites reviewed by other team members before starting the tests.
Framing bias occurs when people make a decision based on the way the information is presented, rather than on the facts.
The same facts presented in two different ways can lead to people making different decisions.
For example, the decision whether to perform a surgical operation may be affected by whether the operation is described in terms of success rate or failure rate, even if the two figures provide the same information.
Applied to testing, this means testers tend to validate only the expected behaviour, so negative tests are ignored. When writing test cases, we tend to cover every requirement in terms of its expected behaviour and miss the negative flows, because not all negative flows are explicitly mentioned in the requirements.
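The gap between positive and negative flows can be made concrete with a small example. The validator below and its rule (3 to 20 alphanumeric characters) are hypothetical; the point is that the requirement only states the happy path, while the negative cases must be invented by the tester.

```python
# Hypothetical example: a requirement might say only "usernames of
# 3-20 alphanumeric characters are accepted" - the positive framing.
def validate_username(name):
    """Accept 3-20 character alphanumeric usernames (illustrative rule)."""
    if not isinstance(name, str):
        return False
    return 3 <= len(name) <= 20 and name.isalnum()

# Positive test: the behaviour the requirement explicitly states.
assert validate_username("alice42") is True

# Negative tests: inputs the requirement never mentions, which
# framing bias makes easy to skip.
assert validate_username("") is False          # empty string
assert validate_username("ab") is False        # too short
assert validate_username("a" * 21) is False    # too long
assert validate_username("bob!") is False      # forbidden character
assert validate_username(None) is False        # wrong type
```

Listing the negative cases explicitly, as above, is one way to counter the positive framing of the requirements document.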
Self-serving bias is the tendency to perceive oneself in an overly favourable manner: individuals tend to ascribe success to their own abilities and efforts, but failure to external factors (“the environment provided does not work”, “it works properly on my PC”, etc.).
This attitude has the effect of turning one’s back on continuous improvement, which is one of the keys to a tester’s success.
In order to reduce the effect of this bias, we must apply the Japanese concept of continuous improvement, named ‘KAIZEN’, which prompts the question: what could I have done differently, on my side, to minimise or solve this problem?
The illusion of knowing is the belief that comprehension has been attained when, in fact, comprehension has failed. For example, a situation is wrongly judged to be similar to other known situations, so the person reacts in the usual way without trying to gather further information. The tester may thus under-exploit other possibilities for testing the system or creating new test cases.
This bias guides our perception and limits our ability to think outside the box, resulting in missed bugs. To reduce its effect, it is better to ask questions in case of doubt and to validate the specifications as soon as the first drafts are available.
During the post-project analysis phase, we analyse the statistics and the deliverables (documents, reports, KPIs, software, etc.) and we tend to neglect the psychological effects of the success or failure of a project.