Who Owns Quality? Part 3

“Test early, test often” applies to performance testing, which needs to run continuously from the architecture design phase all the way through the end of the project – ideally on a dedicated system

Test Early

… does not only mean that tests must be run during development; even more importantly, testing must start during the architecture design and prototyping phase.

Just as one does not wait until the release to QA to start testing, one should not wait for the code to be complete to run tests – and to make progress towards “proving that the code works”.

More specifically, performance must be validated during the design and prototyping phase. By “Performance”, I mean not only individual server performance, but also scalability, fault tolerance, longevity, error recovery, behavior under stress, etc. While it may not be possible to test everything with a prototype, one certainly has a duty to validate as much as can be tested. The sooner one tests, on the smallest code base possible, the easier it is to (a) identify performance bottlenecks, (b) fix any issues and (c) minimize the impact of such fixes on the project and other team members. As we all know, performance shortcomings are among the most difficult problems to fix, and their resolution time is among the hardest to predict.
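To make this concrete, here is a minimal sketch of what “testing the prototype” can look like: drive the prototype’s critical path at increasing concurrency and watch where throughput stops scaling. The handle_request() entry point below is a hypothetical placeholder for whatever the prototype actually exposes.

```python
# Sketch: probe a prototype's critical path at increasing concurrency
# to surface scalability bottlenecks early. handle_request() is a
# hypothetical stand-in for the prototype's real entry point.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    time.sleep(0.001)  # placeholder for the prototype's critical operation

def throughput(concurrency, requests=1000):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(handle_request, range(requests)))
    return requests / (time.perf_counter() - start)  # requests per second

for workers in (1, 2, 4, 8, 16, 32):
    print(f"{workers:3d} workers: {throughput(workers):8.0f} req/s")
```

The point is not the harness itself, but that even a dozen lines like these, run against the prototype, start the bottleneck hunt months before QA ever sees the code.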

In fact, one of the fundamental exit criteria of the architecture design and prototype phase must be that it validates performance.

Another reason, in my experience, to test during the design phase is to start the dialog on performance between the Engineering team and the Business Owners (Product Management). In the abstract, we all want faster performance with every release. Yet one has to wait for a first round of performance tests to see how close (or how far) we are from a given target. Thus the cost/benefit analysis of improved (or degraded) performance cannot start until the first round of test results is in. Only then can the time and resources necessary to reach the desired level be evaluated with some degree of accuracy.

In some cases, “forcing” my team members to run performance tests is the only way to have them read the spec ☺. As they go through their design, I often remind them: “If you don’t know how to test it, you don’t know how to design it.”

Test Often

… the other half of the “Test early, test often” mantra reminds us that performance needs to be tested continuously throughout the development process. We have all experienced performance being impacted by the strangest things. The worst “death marches” that I have experienced were the consequence of a serious performance issue found in the last days of the release. I strongly recommend running a minimum set of performance tests within each milestone (or sprint, if you use the Scrum methodology).

My “best practice” is to run performance tests continuously – from the first day until the last day of the project – on a dedicated system that exercises the latest stable release (e.g. from the last milestone/sprint) 24×7. The architects and developers will have coded automated tests that exercise the corner cases of performance, including stress tests. Furthermore, running the tests over long periods of time – two weeks minimum with the same executables – also catches memory leaks and other resource-exhaustion bugs.
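As a sketch of such a soak harness – assuming a run_workload() hook into the product’s automated tests, which is a placeholder name – the loop below runs the workload continuously and samples memory every hour, so that a slow leak shows up as a steadily rising trend in the log:

```python
# Sketch of a 24x7 soak-test loop: run the workload continuously and
# sample memory usage so slow leaks appear as a rising trend over days.
# run_workload() is a hypothetical hook into the product's test suite.
import time
import tracemalloc

def run_workload():
    pass  # placeholder: one iteration of the performance/stress tests

def soak(duration_hours=14 * 24, sample_every_s=3600):
    tracemalloc.start()
    deadline = time.time() + duration_hours * 3600
    while time.time() < deadline:
        next_sample = time.time() + sample_every_s
        while time.time() < next_sample:
            run_workload()
        current, peak = tracemalloc.get_traced_memory()
        # A "current" figure that climbs monotonically across days is
        # the signature of a leak.
        print(f"{time.ctime()}  current={current} B  peak={peak} B")

if __name__ == "__main__":
    soak()
```

Note that tracemalloc only tracks Python-level allocations; for a product written in C/C++ or Java, one would sample the process resident set size or heap statistics instead. The principle is identical: same executables, long duration, trend the samples.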

Who Owns Quality? Part 2

Developers must take ownership of testing their code for functionality, integration and performance

Let us examine the consequences of “Developers Own Quality”.

Quality is already in the code by the time it is delivered to the QA team

In other words, the code meets all functionality and performance objectives. The obvious consequence – as suggested by Extreme Programming (XP) and Agile software development – is that, in addition to writing code, developers must also test it. More importantly, developers own the results of these tests.

Too often, I have heard developers claim that their task was complete once they had provided unit tests along with their code. Writing unit tests is a good thing – an important and necessary step – but it is far from sufficient. Rather, developers must take a results-oriented approach to testing, and ask themselves: do my tests PROVE that my code works?

Beyond a comprehensive suite of unit tests, which validate basic operation of the code, two main areas must be addressed: (a) integration and (b) performance.

Integration testing leads us to another XP and Agile best practice: frequent integration releases (or milestones) to ensure that all newly contributed code plays well together. For example, two developers will often have different interpretations of the same API. While each may have done the right thing in his or her own mind, and each passes the tests they created individually, the code, once integrated, will not work.
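A minimal, hypothetical illustration: suppose Developer B implements get_uptime() to return milliseconds, while Developer A’s billing code assumes it returns seconds. Each developer’s unit tests pass against their own interpretation; only a test that exercises both modules together exposes the mismatch. (All names below are made up for the example.)

```python
# Developer B's module: returns uptime in milliseconds.
def get_uptime():
    return 90_000  # B's interpretation of the API

# Developer A's module: converts "seconds" of uptime into minutes.
def billable_minutes(uptime):
    return uptime / 60  # A's interpretation assumes seconds

def test_integration():
    # Exercise both modules together, the way the product actually runs.
    minutes = billable_minutes(get_uptime())
    assert minutes == 1.5, f"expected 1.5 minutes, got {minutes}"

test_integration()  # fails by design: 1500.0 != 1.5 -- the mismatch surfaces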

So, why ask developers, rather than QA, to test integration and performance? It is simply a matter of efficiency.

The process of releasing code to QA – having QA set up its test environments, find a bug, make sure it really is a bug, file the bug, assign it, re-run the test for the developer, wait for the fix, verify the fix, verify that the fix did not break anything else that worked before, and finally close the bug – is just too long. It should only occur in exceptional circumstances, or in controlled situations (more on this later).

To me, it is also a matter of pride. As a developer, I need to be confident that I deliver a solid work-product to my teammates. Finding a serious bug in my code (whether functional or performance-related) once I have released it should be a major embarrassment. I often tell my team – jokingly – “If QA finds a Severity 1 or 2 bug in your code, you owe me fifty bucks!”, as an illustration of the level of confidence and pride that one should have in one’s code.

In summary, comprehensive testing is part and parcel of development. A developer who is proud of his/her code, and proves that it meets all functional, integration and performance requirements, is not only an efficient developer, but someone who makes the whole team efficient.

Who Owns Quality? Part 1

Understanding which role in the Engineering team owns quality is critical to determining how we run our projects

Over the past twelve years, I have had the opportunity to lead the Engineering team in over a half-dozen companies, and have observed an incredible variance in how each of the engineers answered this question: “Who owns quality?”

At only one of the companies that I joined did the answer match my own.

In my experience, answering this question properly – and building corresponding software engineering processes – is critical. How an Engineering team addresses the ownership of quality has fundamental implications on how it operates. It impacts just about everything!

  • The daily tasks of each developer
  • The daily tasks of each QA engineer
  • The selection of software development tools and artifacts
  • The sequencing of tasks in software releases
  • The ability of the team to deliver quality product on time

The vast majority of answers fall into two bins: it is either “Everybody” or “QA”.

While it is hard to argue against the philosophy that everyone owns quality, this is an empty, and non-actionable, answer. When “everybody” is responsible, no one takes responsibility.

QA certainly has a big role to play in ensuring that we deliver high-quality products. However, there is a fundamental reason why QA does not own quality: QA has little control over it. QA does not write the code; developers do. Asking QA to own quality is akin to asking the proverbial blind man to describe the elephant! It also implies a process where quality is added after the fact, once the code has been written. Let us remember what QA stands for: Quality Assurance – not Quality Addition, or Quality Creation.

We all know that quality has to be built in, not added on.

To me, the right answer is: Developers Own Quality.

… to be continued

About Software Engineering – from the Trenches

Software Engineering applies a holistic optimization to all the tasks, beyond coding and testing, involved in creating a software product

“Software Engineering – from the Trenches” chronicles what it takes to create a software product — in real life.

“Software Engineering – from the Trenches” is not only about “software development”; writing code is only one task – necessary, but not sufficient, to build a product. We will also discuss requirements, architecture, design, testing, release management, documentation, deployment, and support. One of the main themes in this blog is that Engineering is holistic and encompasses all these critical activities which, whether we like it or not, consume the time of each software engineer. One of our main goals is thus to approach product creation with a methodology that is optimized across all these activities. For example, while iterative development methodologies (XP and agile software development among them) are quite popular, we will advocate for, and justify, strong and detailed upfront design.

Before jumping into the fray of software methodology, our first series of posts will focus on the roles and responsibilities of the different actors in Software Engineering: developers and testers of course, but also product managers, release managers, consulting engineers, etc. Before examining team-level strategies, we need to first agree on everyone’s scope of responsibilities and mutual expectations. We each need to understand our own, and each other’s, job description before we crack open the playbook. Surprisingly enough, controversy has erupted whenever I have broached this topic with my team at each company where I have worked.

This blog is for you if … you are a software engineer, QA engineer, support engineer, product manager, release/project manager, software architect, lead, director, VP, or CEO. Anyone who is attempting to understand the mystery of software creation, anyone whose day job (and/or night job) involves software, will benefit from this blog and learn road-tested techniques to reduce stress, increase predictability and stimulate innovation.