In the first part of this series, we identified some of the myths being circulated in the QA community. Our argument is that 100% QA Automation is not advisable and that the need for Manual Quality Assurance will always exist, because of these fundamental problems:

  • The lack of specificity in requirements documentation makes it impossible to define expected behavior well enough to code against and create PROPER assertions.
  • Performing automated checks of whether software meets business needs is VERY difficult.
  • There is a fundamental critical-path problem: given when software is first made available, producing valuable, robust, reliable tests in time is simply NOT VIABLE.
  • Manual QA is REQUIRED and faster for first-pass, real-time functional testing.
  • Like everything else, Automation is a TOOL, not a silver bullet.

We hate to say it, but your requirements probably suck. Agile is not an excuse to skip writing requirements. And because your requirements are atrocious, the expected behaviors are never fully defined. That makes Manual Quality Assurance difficult, and it makes real-time functional test automation impossible. If nothing else, Lessons Learned in Software Testing has helped us understand that “Automating without good test design may result in a lot of activity, but little value.” You need the human element to make judgment calls.
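
To make that concrete, here’s a minimal pytest-style sketch in Python. The calculate_discount function, the 10% rate, and the $100 boundary are all hypothetical, invented for illustration. The vague requirement only supports a weak assertion; the fully specified one supports PROPER assertions, boundary included.

    # Hypothetical example: the function under test, the 10% rate, and
    # the $100 boundary are all invented for illustration.

    def calculate_discount(order_total: float) -> float:
        """Stand-in for the application code under test."""
        if order_total > 100.00:
            return round(order_total * 0.10, 2)
        return 0.00

    # Vague requirement: "Orders over $100 get a discount."
    # The strongest honest assertion is weak: it cannot catch a wrong
    # rate, a wrong boundary, or a rounding bug.
    def test_discount_vague_requirement():
        assert calculate_discount(150.00) > 0

    # Specific requirement: "Orders strictly over $100.00 receive a 10%
    # discount, rounded to the cent; orders at or below $100.00 receive
    # none." Now PROPER assertions are possible, boundary included.
    def test_discount_specific_requirement():
        assert calculate_discount(150.00) == 15.00
        assert calculate_discount(100.00) == 0.00   # boundary now defined
        assert calculate_discount(100.01) == 10.00  # rounding now defined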

If we look back to the Manifesto for Agile Software Development, it values the following ideas:

  • INDIVIDUALS AND INTERACTIONS over process and tools
  • WORKING SOFTWARE over comprehensive documentation
  • CUSTOMER COLLABORATION over contract negotiation
  • RESPONDING TO CHANGE over following a plan

That is, while there is value in the items on the right, we value the items on the left more. A strictly literal reading of these ideas is a mistake. For example, if Johnny takes a cruise and is suddenly (and sadly) lost at sea, the knowledge stored in Johnny’s head is useless unless it’s recorded somewhere. Creating documentation, or just writing things down, keeps teams on the same page with a single source of referenceable truth. The level of documentation you create should correspond to the complexity of the feature(s) you’re documenting. Obviously, large, complex features require more documentation than simple ones. You cannot test whether software performs as expected until you have fully defined the expectation. And the entire team needs to understand the expected behavior.

The Westfall Team’s Writing Testable Requirements tells us that natural language is ambiguous: depending on which word we emphasize, the entire meaning of the sentence changes. For instance, if we stress MARY, someone might ask, “If not Mary, then who?” If we stress HAD, someone might ask, “What happened to the poor lamb?” And so on. As a result, QA performing requirements analysis for testability is key. Quality Assurance must determine whether a requirement is testable: during testing, can we say that the requirement passed or failed without question? (See the short sketch after the examples below.)

MARY had a little lamb.
Mary HAD a little lamb.
Mary had A little lamb.
Mary had a LITTLE lamb.
Mary had a little LAMB.
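
Here’s what that looks like in practice, as a short, hedged sketch in Python. The URL, the query, and the two-second budget are hypothetical, invented for this example; the point is that the rewritten requirement produces an unambiguous pass or fail.

    # Hypothetical illustration: the URL, the query, and the 2.0-second
    # budget are invented; substitute your own requirement's specifics.
    import time
    import urllib.request

    # Untestable: "The search page should load quickly."
    # Quickly for whom? On what network? There is no unambiguous pass/fail.

    # Testable rewrite: "A simple /search query returns its complete
    # response in under 2.0 seconds." It passes or fails without question.
    def test_search_responds_within_two_seconds():
        start = time.monotonic()
        with urllib.request.urlopen("https://example.com/search?q=lamb") as resp:
            resp.read()  # wait for the full response body
        elapsed = time.monotonic() - start
        assert elapsed < 2.0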

The common “West Coast” software development process is lean and mean. We’d argue for everything in moderation. Don’t hate the word process. A process is a series of steps to achieve a particular end. And don’t hate organization either. Hate fat processes, yes. Hate unnecessary process, yes. But don’t hate organization. If Marie Kondo has taught us anything, it’s that everyone can use some tidying up!

Let’s not forget about verification and validation – the distinction between the two is:

  • Verification: Are we building it according to the requirements? Are we building it right?
  • Validation: Are we building the right thing?

This isn’t always properly captured, and again, we need humans who understand the context in which the application lives to know whether it’s truly delivering on the business need. Development often changes and negotiates requirements during the actual development period, which gives Quality Assurance very limited time to react. How are the automation test writers supposed to react in time? An application in development is, by definition, unstable. How do you write reliable, robust tests against it?
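
To sketch the gap in code (all names hypothetical, invented for illustration): verification can often be automated, because it checks the product against what the written requirement literally says, while validation asks whether that requirement was the right thing to build in the first place, and that question needs a human.

    # Hypothetical example: export_report and its requirement are
    # invented for illustration.

    def export_report(records: list) -> str:
        """Stand-in application code: exports records as CSV text."""
        header = "id,amount"
        rows = [f"{r['id']},{r['amount']}" for r in records]
        return "\n".join([header] + rows)

    # VERIFICATION ("Are we building it right?") automates cleanly,
    # because the written requirement ("CSV with an id,amount header
    # row") is explicit.
    def test_export_matches_written_requirement():
        lines = export_report([{"id": 1, "amount": "9.99"}]).splitlines()
        assert lines[0] == "id,amount"
        assert lines[1] == "1,9.99"

    # VALIDATION ("Are we building the right thing?") resists automation.
    # Whether a CSV export actually serves the accounting team, or
    # whether they really needed a direct Excel import, is a judgment
    # call no assertion can make; a human with business context decides.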

There’s a fundamental time problem here. If Quality Assurance is supposed to be off the critical path, or transparent to the continuous integration and continuous delivery (CI/CD) process, how can they have access to the finished application, respond to the changes, write and debug their tests, and have those tests ready to run by the time development has the finished feature ready? And because automation typically only gets the happy path implemented in that window, it usually doesn’t make sense to attempt out-of-order, boundary, or negative tests.

So, what’s the resolution? In the final installment of this series, we’ll discuss how to implement a solution. Do you have questions about the ideal software pipeline for automation? We’d be happy to share examples that we’ve found that work for many companies, just like yours. Contact us today!