Software Testing - Meh-nual Testing

I wrap up my series on software testing today with a discussion of the roles of quality analysts and software test engineers along with the role that manual testing plays in software development today.

Over the course of my career, I've seen the role of QAs diminish significantly in organizations.  Management reasons that much of the responsibility for quality should be moved to an earlier part of the development ("shifted left" per the typical industry nomenclature) via automation and that the individuals responsible for developing the code should be responsible for testing it.  Budget also plays a part, since hiring for an extremely specialized skill at scale can get expensive, but, as usual, that's downplayed in favor of a more (ostensibly) sensible reason.

I'll admit that I'm in general agreement with that assessment.  Per my previous blog posts, I believe there's a lot of ground software engineers can cover via unit and integration tests that was historically left fallow or lazily thrown over the wall for others to handle.  This disavowal of responsibilities lengthens development cycles, shifts accountability to the wrong parties, and weakens the overall product offering.

However, there's a marked difference between ensuring software engineers are tasked with the appropriate responsibility and claiming that QAs are unnecessary, as many organizations seem to imply.

The roles of a QA and software engineer, while complementary, are also in opposition.  A software engineer's role is primarily one of creation - adhering to a set of requirements and implementing those requirements as well as is reasonably possible.

A QA's role is one of investigation and skepticism - viewing both the requirements and the code produced from those requirements as incomplete, and poking holes in what was otherwise considered a finished deliverable.

As a manager, I would half-jokingly pit the two sides against each other in friendly competition.  I'd challenge software engineers to write code that preemptively met our QAs' exacting standards while murmuring from the other side of my mouth that the QAs should find every last nit to pick in the codebase.

When those two roles are married together in one position, it's like letting the fox into the henhouse.  If software engineers are responsible for testing their own code, their unconscious biases will be hard to ignore.  Humans, in general, have a difficult time reviewing their own work with a critical eye.  It's why editors exist in publishing - to provide a more objective view of a writer's work (the irony that this blog is self-edited isn't lost on me, but I swear those typos and odd turns of phrase that dead end are completely intentional).

This doesn't mean self-editing is a dead end.  There are a few tricks, especially in software development, that can improve an engineer's odds.  

One is the aforementioned use of unit and integration testing.  Splitting the code into sufficiently small sections during implementation makes it much easier to reason about corner cases than it is after writing 1,000 lines of brilliant code and bolting on tests half-heartedly after the fact.  Utilizing unit tests also demonstrates the discipline needed to chunk up this work appropriately and is generally an indicator of a good software developer.
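As a minimal sketch of what that looks like in practice (the function and its cases here are hypothetical examples, not from any particular codebase), keeping each unit small enough that its corner cases are easy to enumerate while you're writing it:

```python
# Hypothetical example: a small, isolated unit whose corner cases
# are easy to enumerate at implementation time.

def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Tests written alongside the implementation, while the corner cases
# (boundaries, inverted range) are still fresh in mind.
def test_clamp():
    assert clamp(5, 0, 10) == 5      # in range: unchanged
    assert clamp(-3, 0, 10) == 0     # below range: clamped to low
    assert clamp(99, 0, 10) == 10    # above range: clamped to high
    assert clamp(0, 0, 10) == 0      # boundary value: unchanged
    try:
        clamp(5, 10, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("inverted range should raise")

test_clamp()
```

A chunk this small is trivial to test exhaustively; the discipline is in keeping every unit this small, so the same reasoning applies across the whole codebase.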

The second is simply time.  Our minds are non-linear.  In the middle of the creative flow, something may seem novel or simple; on a second look, it may be neither, and you may find a better way of implementing it on the next pass.

Third, you can take advantage of pair programming.  Simply having someone to bounce your ideas off of in real time gets you out of your head and forces you to make the case for the brilliant idea that others may see as convoluted.  You've also got instant accountability.

Finally, you have code reviews and pull requests.  By utilizing these mechanisms, you're taking advantage of having an editor, or at least, metaphorically speaking, the eye of a fellow writer.  You also have the benefit of time again.  By the time someone responds with comments, you may have already spotted errors in your own logic.

You can get far with all of those options.  But, if your organization is large enough, you should think seriously about employing dedicated QAs.

QAs have been trained to think critically about software and its interaction with your business domain.  In a healthy organization, they are less inclined to bow to pressure and ship defective code than engineers who face conflicting goals in the same scenarios.  This is beneficial to the overall product and, therefore, the overall business.

Given the shift-left approach to quality and ever-tightening budgetary constraints, it's difficult to justify a QA for each team.  Instead, QAs should be consulted from the initial phases of a project to create test plans and identify specific areas to test, which are then handed off to the engineers for implementation (think of them as Product Managers for Quality, if you will).

Teams shouldn't rely on QAs to perform their testing for them, as doing so overburdens the QAs and shifts the responsibility for quality in the wrong direction.  Teams should treat QAs as knowledgeable consultants and use them to identify testing strategies they can run independently or to surface tricky corner cases and scenarios.

QAs should also be utilized for performing spot testing in workflows as independent verifiers.  Finally, they should be the ultimate release gatekeeper.  If they find the current build or project unsuitable for release, the code shouldn't be released.

Because we're talking about businesses and money is at stake, there are natural exceptions to this rule, but the manager should preface any report with "we're releasing this against the advice of our QA staff because..."  The manager shouldn't find ways to tease a different answer out of the QA that softens the assessment.  Just be honest about the current state of your software, and you'll go a lot further than trying to put lipstick on a pig.

Finally, when it comes to employing software engineers specifically to write tests, I'd opt out.  We've had Software Development Engineers in Test (SDETs) at several places I've worked.  The engineers themselves are capable, but, again, displacing the entire testing apparatus away from the software engineers' responsibility creates a huge conflict of interest and a gap in quality.

If you're lucky enough to have engineers who gravitate toward quality-focused roles, find a way to move them into a QA position with room to grow - one that lets them create automated tooling but still requires the development engineers on your team to write and maintain the automated tests.

In that scenario, you'll have someone quality-minded writing quality-focused tools while your engineers remain accountable for their code.

Until next time, my human and robot friends.
