Some classic testing mistakes

Below is a list of the most common mistakes made during the testing process, which I hope you’ll find useful and remember to avoid. They are grouped into the following categories: A. Role of testing, B. Planning the complete testing effort, C. Personnel issues, D. Tester at work, E. Test automation, F. Code coverage.

A. Role of testing

  1. Thinking that the testing team is responsible for assuring quality.

  2. Thinking that the purpose of testing is (only) to find bugs.

  3. Not finding the important defects (focusing on low-priority defects rather than high-priority ones).

  4. Not reporting usability problems.

  5. No focus on an estimate of quality (and on the quality of that estimate). With a well-defined testing procedure, the test team can certify the software to a specific level of quality based on the pass ratio and coverage of the designed tests.

  6. Reporting bug data without putting it into context.

  7. Starting testing too late (turning testing into bug detection rather than bug reduction).
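
Mistake 5 above mentions certifying quality from the pass ratio and coverage of designed tests. A minimal sketch of what such an estimate might look like (the function name, weighting, and numbers below are illustrative assumptions, not part of the source):

```python
def quality_estimate(results, coverage):
    """Summarize a test run as a crude quality estimate.

    results  -- list of booleans, True for each passing test
    coverage -- fraction of the product exercised by the suite (0.0-1.0)
    """
    if not results:
        raise ValueError("no test results to summarize")
    pass_ratio = sum(results) / len(results)
    # The pass ratio only speaks for what was actually tested,
    # so weight it by how much of the product the suite covers.
    return pass_ratio * coverage

# Example: 18 of 20 tests pass, and the suite covers 75% of the product.
estimate = quality_estimate([True] * 18 + [False] * 2, 0.75)
print(round(estimate, 3))
```

A high pass ratio over a narrow suite says little; reporting the two numbers together puts the bug data into context (mistake 6).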

B. Planning the complete testing effort

  1. A testing effort biased toward functional testing.

  2. Under-emphasizing configuration testing.

  3. Putting stress and load testing off to the last minute.

  4. Not testing the documentation.

  5. Not testing installation procedures.

  6. An over-reliance on beta testing (especially in the V and W models).

  7. Insisting on finishing one testing task before moving on to the next. (Sometimes this cannot be avoided: project circumstances may force the tester either to retrace steps to compensate for changes in the software, or to put a task on hold pending a new addition or modification.)

  8. Failing to correctly identify risky areas.

  9. Sticking stubbornly to the test plan.

C. Personnel issues

  1. Using testing as a transitional job for new programmers.

  2. Recruiting testers from the ranks of failed programmers.

  3. Testers are not domain experts.

  4. Not seeking candidates from the customer service staff or technical writing staff.

  5. Insisting that testers be able to program.

  6. A testing team that lacks diversity.

  7. A physical separation between developers and testers.

  8. Believing that programmers can’t test their own code.

  9. Programmers are neither trained nor motivated to test.

D. Tester at work

  1. Paying more attention to running tests than to designing them.

  2. Unreviewed test designs.

  3. Being too specific about test inputs and procedures.

  4. Not noticing and exploring “irrelevant” oddities.

  5. Checking that the product does what it is supposed to do, but not that it doesn’t do what it isn’t supposed to do (negative tests).

  6. Test suites that are understandable only by their owners.

  7. Testing only through the user-visible interface.

  8. Poor bug reporting.

  9. Adding only regression tests when bugs are found.

  10. Failing to take notes for the next testing effort.
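
Item 5 above warns against testing only the happy path. A minimal sketch of pairing a positive test with negative tests (the `parse_age` function and its range limits are a made-up example, not from the source):

```python
def parse_age(text):
    """Parse a non-negative integer age from user input."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0 or value > 150:
        raise ValueError(f"age out of range: {value}")
    return value

# Positive test: the product does what it is supposed to do.
assert parse_age("42") == 42

# Negative tests: the product rejects what it is not supposed to accept.
for bad in ["-1", "151", "forty-two"]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: invalid input is rejected
    else:
        raise AssertionError(f"accepted invalid input: {bad!r}")

print("all checks passed")
```

A suite containing only the first assertion would pass even if `parse_age` accepted any string of digits whatsoever.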

E. Test automation

  1. Attempting to automate all tests.

  2. Using GUI capture/replay tools to reduce test creation cost.

  3. Expecting regression tests to find a high proportion of new bugs.

F. Code coverage

  1. Removing tests from a regression test suite just because they don’t add coverage.

  2. Using coverage as a performance goal for testers.

  3. Abandoning coverage entirely.

 

Source: ISTQB – Foundation Course.