Who Owns Quality? Part 3

“Test early, test often” applies to performance testing, which needs to run continuously from the architecture design phase all the way through the end of the project, ideally on a dedicated system.

Test Early

… does not only mean that tests must be run during development; even more importantly, testing must start during the architecture design and prototyping phase.

Just as one does not wait until after the release to QA to start testing, one should not wait for the code to be complete to run tests, and thus make progress toward “proving that the code works”.

More specifically, performance must be validated during the design and prototyping phase. Under the term “performance”, I include individual server performance, scalability, fault tolerance, longevity testing, error recovery, behavior under stress, etc. While it may not be possible to test everything with a prototype, one certainly has a duty to validate as much as can be tested. The sooner one tests, on the smallest possible code base, the easier it is to (a) identify performance bottlenecks, (b) fix any issues and (c) minimize the impact of such fixes on the project and other team members. As we all know, performance shortcomings are among the most difficult problems to fix, and their resolution time is among the hardest to predict.
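To make this concrete, a prototype-phase performance check can be as small as a timing loop with an explicit latency budget. The sketch below is a minimal illustration, not a prescribed harness; `serialize_order` and the 5 ms budget are hypothetical placeholders standing in for whatever code path the prototype exercises.

```python
# Minimal sketch of a prototype-phase performance check.
# `serialize_order` and the 5 ms budget are hypothetical placeholders.
import statistics
import time


def serialize_order(order: dict) -> str:
    # Stand-in for a prototype code path under test.
    return ",".join(f"{k}={v}" for k, v in sorted(order.items()))


def measure_latency(fn, arg, iterations: int = 1000) -> float:
    """Return the median latency of fn(arg) in milliseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn(arg)
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)


order = {"id": 42, "sku": "A-7", "qty": 3}
median_ms = measure_latency(serialize_order, order)
# The budget fails the build early instead of surprising the team later.
assert median_ms < 5.0, f"latency budget exceeded: {median_ms:.3f} ms"
```

The point is not the numbers but the habit: a budget written down this early becomes an exit criterion rather than a post-release negotiation.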

In fact, one of the fundamental exit criteria of the architecture design and prototype phase must be that it validates performance.

Another reason, in my experience, to test during the design phase is to open the dialog on performance between the Engineering team and the Business Owners (Product Management). In the abstract, we all want faster performance with every release. Yet one has to wait for a first round of performance tests to see how close (or how far) we are from a given target. Thus the cost / benefit analysis of improved (or decreased) performance cannot start until the first round of test results. Only then can the time and resources necessary to reach the desired level be evaluated with some degree of accuracy.

In some cases, “forcing” my team members to run performance tests is the only way to get them to read the spec :-). As they go through their design, I often remind them: “If you don’t know how to test it, you don’t know how to design it.”

Test Often

… the other half of the “Test early, test often” mantra reminds us that performance needs to be tested continuously throughout the development process. We have all experienced performance being impacted by the strangest things. The worst “death marches” I have experienced were the consequence of a serious performance issue found in the last days of the release. I strongly recommend running a minimum set of performance tests within each milestone (or Sprint, if you use the Scrum methodology).

My “best practice” is to run performance tests continuously, from the first day until the last day of the project, on a dedicated system that exercises the latest stable release (e.g. from the last milestone/sprint) 24×7. The architects and developers will have coded automated tests that exercise the corner cases of performance, including stress tests. Furthermore, running the tests over long periods of time (two weeks minimum with the same executables) also guards against memory leaks and other resource-exhaustion bugs.
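The leak-detection side of such a longevity run can be sketched with the standard library alone. The example below is an assumption-laden miniature: `run_workload` is a hypothetical stand-in for the system under test, and a real 24×7 harness would drive the actual server for days and track process-level memory, not a Python heap for a few seconds. Still, it shows the shape of the check: sample memory repeatedly under a steady workload and flag sustained growth.

```python
# Minimal longevity-test sketch using only the standard library.
# `run_workload` is a hypothetical stand-in for the system under test.
import tracemalloc


def run_workload(cache: dict, iteration: int) -> None:
    # Stand-in workload; the bounded cache keeps memory flat.
    cache[iteration % 100] = "x" * 1024


def sample_memory(cycles: int = 50, batch: int = 200) -> list:
    """Run the workload in batches, recording heap usage after each batch."""
    tracemalloc.start()
    cache: dict = {}
    samples = []
    for cycle in range(cycles):
        for i in range(batch):
            run_workload(cache, cycle * batch + i)
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
    tracemalloc.stop()
    return samples


samples = sample_memory()
# Heuristic: memory in the second half of the run should not grow
# past the first half by more than a small tolerance.
first_half = max(samples[: len(samples) // 2])
second_half = max(samples[len(samples) // 2:])
assert second_half <= first_half * 1.5, "possible memory leak"
```

The same pattern scales up: swap the in-process heap sample for the server’s resident set size, stretch the cycles over weeks, and alert when the growth heuristic trips.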
