• Graham

The 5th Myth of Software Testing - Zero-defect software is achievable

Updated: Sep 16



Saving the best until last.


No, no, with a side of no.


100% defect-free software is impossible (obviously within the context of the complexity/desired outcome/quality goal scale). Any defect policy is ultimately defined by the level of tolerance an end-user is prepared to accept. Why set yourself up for failure by positing an unattainable goal? In a relatively bygone age, the space shuttle program was considered amongst the best examples of software engineering, with an estimated 0.1 defects/KLoC (defects per thousand lines of code).

There is a general consensus in some areas of the safety-critical systems community that a fault density of about 1 defect per KLoC is world class. It's also worth bearing in mind that safety-critical / mission-critical software has a strong focus on the 'no-boom' effect, which has an absolute effect on the acceptable tolerance. Commercial software, by contrast, sits at around 4-20 defects/KLoC.

Ref: Software in Safety Critical Systems: Achievement and Prediction https://www-users.cs.york.ac.uk/tpk/nucfuture.pdf
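Defect density is a simple ratio, and it's worth being explicit about the arithmetic behind figures like 0.1 or 4-20 defects/KLoC. A minimal sketch (the defect counts and codebase sizes below are illustrative assumptions, not figures from the reference):

```python
def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    """Defect density expressed as defects per thousand lines of code (KLoC)."""
    return defects / (lines_of_code / 1000)

# Shuttle-class engineering: 1 defect in a hypothetical 10,000-line codebase
print(defects_per_kloc(1, 10_000))   # 0.1 defects/KLoC

# Typical commercial software: 50 defects in the same-sized codebase
print(defects_per_kloc(50, 10_000))  # 5.0 defects/KLoC
```

The point of the comparison: even world-class fault densities are non-zero, which is exactly why a literal zero-defect target is unattainable.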


What does this mean?...Whatever you want it to mean. It's all about the context. Would I worry about defects in Candy Crush? Probably not. Would I worry about defects in my banking app? Absolutely.


People who would maintain otherwise are either woefully misinformed about the complexity of software development, natural politicians, or invent their own definition of 'zero-defect' so as to make the term meaningless.


These people are out there; none I've come across has been a QA engineer. Read into that what you will.

The team I recently coached had around 5000 lines of code with zero formal defects that either we or the customer had identified. This is both a cause for concern and a cause for celebration. A cause for concern, as I know that this just isn't possible. A cause for celebration, because the team wholeheartedly adopted the shift-left mindset: we have valuable 3 amigos and refinement sessions, our ACs are descriptive, we have copious unit tests of value, we undertake exploratory testing, we review everything we can, we provide evidence of the thing working, and the customer had ample opportunity to provide feedback on regular iterative releases.


The objectives of ‘testing’ a product are multi-faceted:


  • Verifying the expectations marry up with the documentation,

  • Undertaking best efforts on the developed feature(s) and verifying (within the constraints) that the feature exhibits the desired outcome(s) for a known/unknown user at a point in time given a known configuration according to the actual business requirements…and helping to ensure that the customer takes delivery of what they actually wanted…also…

  • Verification of release artefacts

  • Verification of infrastructure

  • Verification of documentation produced during the dev cycle

  • Verification of any non-functional 'ilities' — Accessibility, UX, Security, Performance

  • Promoting and advocating for a quality mindset each and every day

  • Attempting to ensure that defects don’t make it as far as the codebase by the application of robust refinement / 3 amigos / test designs / collaboration


Defects can, and will, exist in each of these areas. Software is developed by people with all the inherent bias and fallibility of people.


Quite simply, zero-defect policies are counterproductive. They impart an unneeded pressure on a development team that will already be under pressure. They are also unprovable. It's far better to advocate and educate on the need to develop a quality mindset throughout each stage. Judge each defect on its merits at the time.


If defects that matter escape into the wild, then undertake some root cause analysis if the defect is severe enough. It's all about context.

©2020 gesqa.com