Automated testing - not all scripts and unicorns
Updated: Jul 4, 2020
There's a lot written about the benefits of *automated* testing. There's a lot of money to be made by tool vendors. There's not a lot written about the downside of *automated* testing. A well-articulated and well-implemented test automation approach is a valuable piece of any development effort. I'm not anti-automated-test-effort; I'm anti-automation-as-a-silver-bullet.
The new starter
You have a new starter. It could be a junior dev.
It could be a junior QA.
It could be a BA.
It could be a PO.
That person will do exactly what you tell them to do when you tell them to do it.
They'll do it over and over, ad infinitum, unless instructed to stop.
They'll never change unless you tell them to change.
They'll only observe or check exactly what you've told them to observe or check.
If what they observe or check doesn't smell right, they'll attempt to carry on.
A smell that doesn't fit their pattern will not trigger any curious, conscious, cognitive function.
If what they observe doesn't equate to what you've told them to observe they will stop, unless you've told them beforehand to carry on.
If what they observe is the result of unanticipated behaviour, they may stop. They may or may not tell you why they've stopped. What they tell you may make sense, or not. It all depends on what you told them to tell you when unanticipated behaviour occurs.
If something prevents them from completing their primary task, they may attempt to carry on. They will do this if you've told them to.
They will do this only if you've articulated the possible interruptions.
They will do this only if you've articulated the precise method of recovery.
They won't comprehend the nature of the disruption and how it relates to uncompleted tasks. They won't comprehend the nature of the disruption and how it relates to completed tasks. They won't ask 'why?'.
Your new starter is automated test execution.
It does exactly what it is told to do based on an initial set of commands that were valid at a specific time for a specific outcome.
It can look for one (or a range) of things that it has been instructed to exactly look for.
It has no innate curiosity.
It doesn't think.
It doesn't reason (beyond the algorithmic constraints you've told it to use, which may or may not be correct).
It doesn't interpret.
It doesn't make value judgments.
It doesn't experience.
It doesn't question.
It can't utilise context.
It can't easily communicate in a meaningful fashion.
It can't utilise experience. It can't explore.
Automated test execution can provide confidence (this thing works as you've told me it should). Automated test execution can provide confirmation of uncertainty (this is not working as you've told me it should).
It can only do this in the context of being formulaic; of following a script; of blind adherence to a set of rules that may, or may not, be correct.
It may detect a non-conformity according to its own, or your, rule-set.
Once a non-conformity has been detected it will tell you (albeit in an abstract, flat manner).
It won't attempt to understand the 'why'.
It won't use critical thinking skills to work around the non-conforming behaviour. It won't use these skills to uncover related non-conforming behaviour.
It won't use heuristics.
It won't use models.
It won't use an oracle.
It won't use *common sense*.
It won't stop execution based on a feeling.
The non-conformity may be a defect.
The non-conformity may be a feature.
The non-conformity may be environmental.
The non-conformity may be the result of a misunderstanding in communication.
The non-conformity may be the result of the test code itself.
It has used no cognitive reasoning. It won't vary its approach. It won't attempt a similar process using a different variable. It won't attempt a different (but similar) process using reasoning and modelling to compare the two. It won't use logs to attempt to understand. It won't reference a design. It won't tap your colleague on the shoulder to ask for a second opinion. It won't interrogate Stack Overflow.
It has simply followed the rules.
It just is.
Automated test execution never has been, and never will be, a panacea for your perceived quality ills.