There is either obviously nothing wrong, or there is nothing obviously wrong.

Software Quality Assurance (SQA) is the part of a software project where someone other than the developer seeks to assess the condition of the current product or service. The SQA plan is where someone writes down how the software will be methodically tested.

Issues found can be diverse:
  • Failure to follow the functional specifications (missing or incorrect features)
  • Essential requirements missing from the specification
  • Essential requirements that are specified but not satisfied
  • Incorrect or unexpected behavior by the system
  • Inconsistencies within the system
  • Misleading or incorrect information presented by the system
  • Unacceptable performance or other usability issues
  • Failure to adhere to normal expectations for a platform (e.g. an iPhone app that shows an email address without offering a way to send email to it)
  • Failure to adhere to the UI/UX design
  • Discrepancies between the shared artifact and the actual system
  • Discrepancies between the user documentation and the actual system
  • Missing parts in the test plan itself

A developer is expected to test their own work first. By submitting it to SQA, they assert either that it works, or that certain portions of it work, making clear which those are.

In theory, SQA should be an unsatisfying process of looking for problems that aren’t there.

The SQA specialist is not responsible for the quality of a software product or service: its creator is. The SQA specialist is someone willing to use all their powers of observation and experience to anticipate and check for problems the product might have. They are a member of the team willing to carefully perform some kinds of repetitive work. They report any exceptions they find and remain available to the engineer who will fix the problem. They are, in essence, a colleague who is willing to check the work of a software engineer.

Some software projects include built-in tests. Test-based development is more accessible for some projects than others; however, thinking about how a software system will be tested before implementation begins is possible for all projects. A less literal interpretation of test-based development calls only for testing to be planned before implementation begins.
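As a minimal sketch of a built-in test, here is what a small unit test might look like in Python. The function and its behavior are hypothetical, invented only for illustration; what matters is that the test pins down expected behavior in a form SQA can later verify.

```python
import unittest


def normalize_email(raw: str) -> str:
    """Hypothetical helper: trim whitespace and lowercase an email address."""
    return raw.strip().lower()


class TestNormalizeEmail(unittest.TestCase):
    # Written before (or alongside) the implementation, these tests
    # record the expected behavior in a repeatable, checkable form.
    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(normalize_email("  Ada@Example.com "), "ada@example.com")

    def test_is_idempotent(self):
        once = normalize_email("Bob@Example.COM")
        self.assertEqual(normalize_email(once), once)
```

Such a file can be run with `python -m unittest`, making the check repeatable by anyone on the team, not just the original developer.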

One important way to enable efficient and effective SQA is to increase visibility into operational and corroborating data. Imagine installing portholes to peer inside a machine and see its inner workings.

In a software project, developers use a set of techniques to peer into the operation of their software. Some of those techniques are available only to them because they understand the software and its data structures. Using a source-level debugger, or even printing debugging statements, is not accessible to an SQA specialist. Instead, reports, views, or indicators that reveal a summary or log of actions, or that reveal the steps of a process, including when they were taken and what the result was, give an SQA specialist the opportunity both to notice problems and to design more sophisticated tests.
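One way such an instrument of visibility might be built is a step log that records each step of a process with a timestamp and its outcome. This is a sketch under assumptions, not a prescription; the job and step names are hypothetical.

```python
import logging
from datetime import datetime, timezone

# A step log like this exposes when each step ran and what it produced,
# giving an SQA specialist a window into the system without a debugger.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("import_job")  # hypothetical job name


def run_step(name, step):
    """Run one step of a process, logging its start time and result."""
    started = datetime.now(timezone.utc).isoformat()
    try:
        result = step()
        log.info("step=%s started=%s status=ok result=%s", name, started, result)
        return result
    except Exception as exc:
        log.error("step=%s started=%s status=failed error=%s", name, started, exc)
        raise


# Example usage with a hypothetical step:
run_step("validate", lambda: "42 records checked")
```

A log in this shape can be scanned by a non-developer: each line answers which step ran, when, and with what result.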

Occasionally a problem is found to be with the instruments of visibility themselves, but more often such instruments reveal actual problems in the system. The instruments of visibility and the product manifest quality in each other so long as people continue to gauge the accuracy of one against the other.

To increase transparency in SQA, begin by writing down a test plan: the overall approach that will be used to ensure the entire system is tested, along with specific details about how individual parts will be tested. I recommend a test matrix -- a spreadsheet that provides a set of repeatable tests and a place to note the results, refer to an issue number, or record questions to ask the developers or the project manager.
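To make the shape of such a test matrix concrete, here is a small sketch that writes one out as CSV, which any spreadsheet program can open. The test IDs, steps, and issue number are hypothetical examples, not real data.

```python
import csv
import io

# Each row is one repeatable test, with columns for the result,
# a related issue number, and open questions for the team.
rows = [
    {"test_id": "T-01", "steps": "Log in with valid credentials",
     "expected": "Dashboard loads", "result": "pass",
     "issue": "", "questions": ""},
    {"test_id": "T-02", "steps": "Submit the form with an empty email field",
     "expected": "Validation error shown", "result": "fail",
     "issue": "BUG-17", "questions": "Is the field required on mobile too?"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The exact columns matter less than the discipline: every test is written down once, run the same way each time, and its outcome leaves a visible trace.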

Transparency in SQA not only makes the SQA process more reliable and repeatable, it also supports the idea of evolving from rote SQA into what I would term self-testing products. Of course the ideal would be a product that tests itself so thoroughly that no human need be involved. But in practice, very few software projects are candidates for complete test automation. At the same time, equally few projects can justify no automated testing whatsoever.

While automated testing doesn’t necessarily support or hinder transparency, transparency becomes more important as more automation is put into place, lest we forget how to maintain the test systems. Transparency in SQA should also help usher in automation by making it clear which areas are candidates for automation, and by providing important details about what that automation should do.