“Our method, though difficult in its operation, is easily explained. It consists in determining the degrees of certainty.”
QA’s Philosophical Evolution
The first code running on the first computers was not overseen by a Quality Assurance department. Programmers, often working with the lowest levels of code, were left to find their own bugs. It later came to be understood that, while programmers could attempt to minimize the defects within their code, in any system of sufficient complexity, they could not exhaustively eliminate them; furthermore, applications were soon expected to be more than just functional. They had to be visually appealing, simple to use, supportable, changeable, and economically valuable. As the concept of what an application should be grew, an actively involved QA department became a necessity.
With QA an established part of the development process, its philosophies evolved to keep up with the demands of stakeholders. The goal went from ensuring an application could function, to hunting for ways the application might fail, to its approximate current state: creating a time- and personnel-efficient process to minimize the risk of the product not living up to stakeholder expectations/desires. This latest paradigm represented QA’s entry into intellectual adulthood: the recognition that all goals/assumptions about how a product should or does function have only a probabilistic relationship to the product’s actual functioning. The point of testing, therefore, became to detect and arrest these faulty assumptions, providing developers with the clearest idea of how to improve the application’s quality.
This view is strongly analogous to the scientific method: Its essence is to start from a place of uncertainty; develop deliberate, falsifiable notions of what would reduce that uncertainty; test and iterate on those notions; and ultimately conclude with a clearer view of reality.
The Initial Question
Scientific motivations vary greatly: areas of discovery find their origins in curiosity, greed, philanthropy, or even serendipity. Regardless of its inspiration, for the scientific method to proceed, an area of inquiry must be defined; moreover, it must be defined specifically enough that testing elements of a theory will allow them to be refined or discarded. This is roughly parallel to what a Business Analyst does with a stakeholder in the development process: they clarify the notion of what a product should be until its implementation can be tested, point by point, against that definition.
A scientist’s refined notion of what they intend to test is called a “hypothesis,” and it’s specifically designed to be tested; one that can’t be either refined or discarded as the result of experimentation is of no use in furthering scientific knowledge. Requirement specifications mirror this closely: Requirements that lack specificity or falsifiability are of no use to testers; therefore, they must be defined in such a way that ensures specific actions can either provide evidence for or refute the idea that a product meets stakeholder expectations.
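To make the contrast concrete, here is a minimal sketch in Python. The function names, data, and latency threshold are all hypothetical, invented for illustration: the point is that a vague requirement like “search should be fast” cannot pass or fail, while a restated, falsifiable version maps directly to an executable check.

```python
import time

def search(records, term):
    # Stand-in implementation; a real product would query a database.
    return [r for r in records if term in r]

def test_search_meets_latency_requirement():
    # Falsifiable restatement: "a search over 10,000 records returns
    # correct results in under 2 seconds." (Threshold is illustrative.)
    records = [f"record-{i}" for i in range(10_000)]
    start = time.perf_counter()
    results = search(records, "record-42")
    elapsed = time.perf_counter() - start
    assert elapsed < 2.0           # the falsifiable performance bound
    assert "record-42" in results  # evidence the feature works at all

test_search_meets_latency_requirement()
```

Either assertion can refute the requirement, which is exactly what makes it useful to a tester.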
In the scientific method, the point at which a hypothesis is tested is called an experiment. Before an experiment can be conducted, experimental procedures are drawn up detailing exactly what will be tested and how. This procedure is defined in terms of what the scientists will change themselves (the independent variables), the changes that would signify the hypothesis had been strengthened or refuted (the dependent variables), and what will be held constant to increase the likelihood that changes to dependent variables originated with changes to independent variables (the controls).
The creation of Quality Assurance test cases uses a similar three-part structure. Analogous to the controls in a scientific experiment, QA departments specify elements and conditions that must be present for testing to proceed. A common example would be attempting to recreate a production environment on test servers: Having completely different sets of users, entities, data, or technologies could account, individually or in concert, for discrepant behaviors; therefore, getting a better picture of how a product will function on a customer’s site necessitates controlling for as many of these as possible. Analogous to independent and dependent variables, QA departments define their tests in terms of atomic steps and expected results. Being able to predict programs’ outcomes based on specific inputs is the essence of ensuring functionality.
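One way to see the parallel is to lay a test case out in those three parts. The following sketch uses Python’s unittest style; the class and figures are hypothetical stand-ins, not any particular product:

```python
import unittest

class DiscountCalculator:
    """Stand-in for the system under test."""
    def __init__(self, tax_rate):
        self.tax_rate = tax_rate

    def total(self, price, discount):
        return round(price * (1 - discount) * (1 + self.tax_rate), 2)

class TestDiscountTotal(unittest.TestCase):
    def setUp(self):
        # Controls: conditions held constant for every run, like a
        # test server configured to mirror the production environment.
        self.calc = DiscountCalculator(tax_rate=0.10)

    def test_ten_percent_discount(self):
        # Independent variables: the atomic steps/inputs we vary.
        result = self.calc.total(price=100.00, discount=0.10)
        # Dependent variable: the expected result, predicted in advance.
        self.assertEqual(result, 99.00)

unittest.main(argv=["discount-tests"], exit=False)
```

The setUp block plays the role of the experimental controls, the method call supplies the independent variables, and the assertion pins down the dependent variable before the test ever runs.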
Neither scientific nor QA methodology ends when the first tests are run. Just as our conceptions of physical reality always hold room for refinement, so do QA departments’ conceptions of product quality. The Nobel Prize-winning physicist Richard Feynman compared this process to working out the rules of a chess game based on limited observations of game states:
“You might discover after a bit, for example, that when there’s only one bishop around on the board, that the bishop maintains its color. Later on you might discover the law for the bishop is that it moves on a diagonal, which would explain the law that you understood before, that it maintains its color. And that would be analogous [to when] we discover one law and later find a deeper understanding of it.”
Similarly, testers must often seek a deeper understanding of the internal logic indicated by anomalous results. For instance, we recently wrote a test designed to ensure users could remove a tag from an entity, and the test, as designed, did not pass. It turned out, however, that the real issue was that entities couldn’t be saved when they had no tags – entities could still be saved with some of their tags removed, as long as they were left with at least one. It was this further testing that gave us the information developers needed to implement a fix.
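The deeper rule we isolated can be sketched as follows. This is a hypothetical model written for this article, not our actual product code; the class and method names are invented:

```python
class ValidationError(Exception):
    pass

class Entity:
    def __init__(self, tags):
        self.tags = set(tags)

    def remove_tag(self, tag):
        self.tags.discard(tag)

    def save(self):
        # The deeper rule uncovered by further testing: an entity
        # must retain at least one tag in order to be saved.
        if not self.tags:
            raise ValidationError("an entity must have at least one tag")
        return True

entity = Entity(tags={"urgent", "billing"})
entity.remove_tag("urgent")
assert entity.save()    # removal works while one tag remains

entity.remove_tag("billing")
try:
    entity.save()       # the original test's failure, now isolated
except ValidationError:
    pass
```

The original test (“a user can remove a tag”) failed not because removal was broken, but because saving with zero tags was rejected; modeling that distinction is what pointed developers at the right fix.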
Drawing a Conclusion
The scientific process, viewed as a communal enterprise, extends beyond one scientist’s final experiments. Results are compiled, published, discussed, and reproduced. QA departments have similar processes: Reports are drawn up based on which tests passed, failed, or couldn’t be run; bugs are reported; and unmet requirements are highlighted. Based on those reports, QA typically makes itself available to development to aid in understanding and reproducing bugs. If project resources don’t allow for all known issues to be fixed prior to release, QA’s reports can double as disclosures to stakeholders about the product’s quality.
If you’d like your website or application idea passed through our Quality Assurance department’s rigorous, interrogative test processes, get in touch with us!
We’d love to help you develop your custom solution today.