Violating the Laws of Physics: When the Business Imposes both Release Date and Features

Managing an Engineering team when the company imposes both the features to be delivered and the release date


In all the companies where I have worked, the business side has been “supportive” of Agile software development methodologies – in its own way :-). They like the story and agree it makes total sense … except for the part about adjusting priorities and not committing ahead of time to delivering specific features by a certain date – even while recognizing that, historically, priorities have changed significantly in the midst of a release.


Given that, for all practical purposes, headcount is fixed (budgets are rarely elastic) and quality is non-negotiable, fixing both the features of the release and the release date violates the laws of physics! Only Engineering can estimate how long it will take to develop a certain set of features (given fixed resources and without impacting quality). The business team (product managers, VP of Marketing, CEO, etc.) cannot estimate the amount of effort a given release will take. Just as we don’t tell our contractor how long (or how much) it will take to remodel our kitchen, the business team must let Engineering scope the effort, and the time, required for a release.


In this multi-part blog, I will present my recommendations on how to best manage a team in this environment. I hasten to say that I have not found the perfect solution, and I am still working hard at refining it daily.


Three important aspects drive the management techniques:

  1. Understanding, and communicating to the Engineering team, the “Why”: why the business needs to impose both dates and features … and why this is unlikely to change materially
  2. Communicating to the business teams what they can expect from the Engineering team, and establishing “rules of engagement”
  3. Understanding, and implementing, the Agile principles that are most helpful in this environment.

The (Legitimate) Reasons for Formal Release Processes

To be clear, a continuous release process, where new features are deployed as soon as they are developed and tested, is ideal. Unfortunately, this is only possible in specific environments, e.g. self-hosted web apps, and does not work for ISVs.


Most ISVs that sell to enterprises need to publish their 18-24 month product roadmap. Customers don’t just buy today’s product, but also tomorrow’s. This two-year product roadmap is most critical for startups, whose buyers accept the risk of buying from a fledgling company because of the promise of the continuous stream of benefits committed in the product roadmap. “Committed” is the operative word: because of its startup status, the company must meet every single one of its promises in order to maintain the fragile trust of its customers.


Here are some common (legitimate) reasons that prevent “continuous” releases and require a formal release process:

  • The software is installed on customers’ premises. In this case, it is important to “version” the code. It would be impossible to manage communications, installation, or support, if each customer installed a different version
  • The overhead of introducing new features makes it too costly to release one feature at a time. Any of the following functions can make this overhead too dear – even in the case of a hosted (SaaS) web-app:
    • QA – if a bug cannot be afforded (e.g. financial applications, regulatory conditions)
    • End-user communications & training: when the usage paradigm changes significantly, deliberate communications and user training will be necessary. Similarly, some environments have seasonality that limits the opportunities to introduce something new (e.g. schools, retail)
    • Integration with 3rd-party partners and/or customer systems: any change in our software will require a phase of joint testing
    • Customer release processes: even in a SaaS environment, customers (e.g. in mission-critical applications such as e-mail, and/or sensitive applications like finance) will impose their own pace of deployment, and limit the number of upgrades to one or two a year
    • Marketing: the business may be such that opportunities to communicate to customers, partners and analysts are rare and costly; consequently, it is necessary to group features in a release to maximize the excitement around the product announcement
  • Customer commitments: as the CEO, VP of Sales, and the sales team scour the country, or the globe, in search of orders, they make commitments to customers – that certain features will be available by a certain date – in order to win deals


In all the situations listed above, one could argue that there is no reason why Engineering should not be involved in setting dates before commitments are made. The point is correct: it is in everyone’s interest to involve Engineering before making commitments. This is called the Product Roadmap process … which we will discuss in a subsequent blog.

Cloud Computing – The Miracle Tool for Testing

Cloud Computing eliminates restrictions due to the number of servers in the QA lab, and thus allows concurrent testing by developers and QA engineers. By making it easy to test often, and to expose early releases to the outside world, Cloud Computing will improve product quality.

Does this story ring familiar? You are in a planning meeting for the next release, and learn that in addition to supporting Oracle 11g, the product will also need to support Microsoft SQL Server 2008 (or DB2, or MySQL, or PostgreSQL). Once the typical brouhaha dies down about how complicated this will be, how the whole code will need to be ripped apart, and how much time this will take, the Director of QA turns to you and asks for a couple of additional servers for the QA lab, so that the software can be tested on the two databases in parallel – a minimum of three servers: one for the database, one for our software, and one for the test fixtures. The following day, it’s the developer lead’s turn to ask for more servers: at least one “populated” database against which the developers can test, plus another set up for the daily build, etc. Makes perfect sense … except that no budget has been allocated for these servers! Soon you find yourself with your beggar’s cup in the CEO’s office, explaining to him and the CFO why your team needs these extra servers when “you already have so many!!”

Rejoice! Here comes Cloud Computing to the rescue …

Cloud Computing not only eliminates the need to purchase servers for testing, but also radically improves your ability to test, and thus improves product quality.

Cloud Computing services, such as Amazon EC2, offer the ability to deploy (and un-deploy) software on demand. One pays “by the hour” of computing used, and for the storage and bandwidth consumed. This is perfect for testing (by developers and by QA): compute load varies greatly over the cycle of the day, as well as over the cycles of the release.

First of all, every developer can now have his/her own test setup against which to test. There is no limitation of hardware, no begging, borrowing or stealing from your colleagues for unutilized servers. One can just deploy at will. Furthermore, there is no restriction on the number of servers. So if you need to test a four-server cluster, you don’t have to hunt around for free servers, you just do it.
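As an aside, here is a minimal sketch of what “deploying at will” can look like, written in Python with boto3 (the AWS SDK for Python, a more recent tool than the original EC2 tooling). The region, AMI ID, instance type, and tags are placeholders, not values from any real project:

```python
# Sketch: launch a throwaway EC2 test environment, then tear it down.
# All identifiers (region, AMI, instance type, tags) are illustrative only.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

def launch_test_env(ami_id: str, count: int = 1):
    """Start `count` instances for a developer's private test run."""
    instances = ec2.create_instances(
        ImageId=ami_id,            # e.g. an image baked by the nightly build
        InstanceType="t3.medium",
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "dev-test"}],
        }],
    )
    for inst in instances:
        inst.wait_until_running()   # block until the environment is usable
    return instances

def tear_down(instances):
    """Terminate the instances so the meter stops as soon as testing ends."""
    for inst in instances:
        inst.terminate()
```

The same pattern covers the four-server cluster case simply by raising `count`: capacity becomes a parameter rather than a negotiation.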

Similarly, the daily build can deploy to multiple test environments concurrently and thus accelerate the validation of the build.

Finally, the QA team can also test in multiple environments simultaneously, e.g. Oracle and SQL Server at the same time! This offers the potential benefit of being able to test a much larger number of deployment scenarios than would be possible using one’s own hardware.
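As a rough sketch of what fanning the same suite out across several back-ends might look like: `provision_env` and `run_suite` below are hypothetical stand-ins for whatever deployment and test-runner tooling a project actually uses.

```python
# Sketch: run the same automated suite against several database back-ends at
# once and collect the results per target.
from concurrent.futures import ThreadPoolExecutor

DB_TARGETS = ["oracle-11g", "sqlserver-2008", "mysql", "postgresql"]

def provision_env(db_target: str) -> str:
    """Deploy the latest build plus the named database; return its endpoint."""
    raise NotImplementedError  # e.g. launch instances as in the earlier sketch

def run_suite(endpoint: str) -> dict:
    """Run the automated tests against one environment; return pass/fail counts."""
    raise NotImplementedError

def test_matrix() -> dict:
    with ThreadPoolExecutor(max_workers=len(DB_TARGETS)) as pool:
        endpoints = list(pool.map(provision_env, DB_TARGETS))
        results = list(pool.map(run_suite, endpoints))
    return dict(zip(DB_TARGETS, results))   # one result set per back-end
```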

Naturally, leveraging a Cloud Computing infrastructure requires new tools.

First and foremost, all the tests must be automated. While technology has created virtual servers, it has not yet invented virtual test engineers :-). Secondly, one will have to build tools to automatically deploy the new version of the software and the test fixtures (e.g. from the build environment), as well as to collect the results of the test runs.

One can be quite creative with the test management tools. For example, if a test setup encounters a high-severity bug, you could configure your test software to pause the run, deploy a second environment, and continue testing there. This lets you go back to the first test setup to troubleshoot and find the cause of the crash.
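One way such pause-and-continue logic might be sketched is below; the hooks (`deploy_env`, `run_test`, `is_blocker`) are hypothetical stand-ins for your own test-management tooling, not an existing API.

```python
# Sketch: when a test hits a high-severity failure, keep that environment
# alive for troubleshooting and resume the remaining tests on a fresh one.
def run_with_failover(tests, deploy_env, run_test, is_blocker):
    env = deploy_env()                  # initial test environment
    preserved = []                      # environments kept alive for debugging
    for test in tests:
        result = run_test(env, test)
        if is_blocker(result):
            preserved.append(env)       # do not tear down: its state is evidence
            env = deploy_env()          # carry on in a clean environment
    return preserved                    # hand these to whoever troubleshoots
```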

Another fascinating advantage is that you can deploy demo or beta systems at will (assuming your deployment model allows it), and let your sales team or prospective customers “play with” the early release. By making it easier to expose early releases of the product to the outside world, Cloud Computing further improves the quality of your product.

Will you save money by testing in a Cloud Computing infrastructure?

Obviously the answer depends … on your usage, but also on factors like how much data you need to keep permanently in the cloud. For example, you may need to permanently store a synthetic database of a million users (it would be too slow to upload it each time). You will also incur higher networking traffic.
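A back-of-the-envelope way to frame that “it depends”: the bill is roughly compute hours plus permanently stored data plus network egress. The rates below are deliberately left as parameters rather than real prices – plug in your provider’s current price list.

```python
# Rough monthly cost model for cloud-based testing. All rates are parameters,
# not real prices; the point is only which factors drive the bill.
def monthly_test_cost(instance_hours: float, stored_gb: float, egress_gb: float,
                      rate_per_hour: float, rate_per_gb_month: float,
                      rate_per_gb_egress: float) -> float:
    return (instance_hours * rate_per_hour        # on-demand compute
            + stored_gb * rate_per_gb_month       # e.g. the synthetic user DB kept in the cloud
            + egress_gb * rate_per_gb_egress)     # traffic back to your own network
```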

In addition, you may not want to move all your tests to the cloud. For example, you may want to keep your stress tests or longevity tests in-house, since these will be running 24×7, and you may want the option of running them on bare metal.

At the end of the day, to me the attraction of Cloud Computing for testing is that it will increase quality (in addition to reducing costs). It will allow each developer to have access to a test environment at will.  It will create an additional impetus for test automation. Cloud Computing will also allow the concurrent deployment of tests to an arbitrary number of computing environments, and make it easier to give early access to your customers. Net-net, this translates to more tests in the same amount of time with less effort. It’s all goodness.

“Dailies” Bolster Creativity

Design reviews do not simply allow me to have my design reviewed, but also give me the opportunity to inspire my team mates with my own ideas, and kickstart brainstorming discussions – thus fostering team creativity

By pure coincidence, I recently listened to a 2008 interview with Ed Catmull, cofounder of Pixar and President of Pixar and Disney Animation, on the topic of “Pixar and Collective Creativity”, on the Harvard Business Review IdeaCast.

The interview centered on the mechanisms that foster innovation – for which Pixar is so famous. Ed Catmull emphasizes the importance of communication at, and across, all levels, and he constantly encourages anyone and everyone to share their thoughts, critiques, and suggestions.

To this effect, he encourages all the teams at Pixar to have dailies: meetings at the end of the day where each animator shows the rest of the team their accomplishments of the day, whether complete or not. This is a vulnerable moment, where one has to show work in progress, warts and all, to colleagues (there’s always a bit of a competitive spirit at work) and possibly to Ed himself, if he happens to drop by. Yet it is also a great opportunity, not only to stimulate suggestions from one’s colleagues on how to improve one’s own work, but also to give them ideas, or to kick-start a brainstorm about other people’s work and the project in general.

To me, the concept of dailies translates naturally to design reviews in the software development world, as I blogged a few days ago. I don’t necessarily advocate for daily design reviews, but certainly for frequent ones; most importantly early on, before foundational decisions are made, so as to actually benefit from the team’s suggestions.

Ed Catmull highlights another set of benefits of design reviews, potentially even more powerful for fostering team creativity (rather than just individual creativity) than simply having my design double-checked: my own work and ideas can inspire my colleagues, and the very process of reviewing my work can stimulate brainstorming discussions about new concepts and ideas. This is powerful stuff!

Design Review Checklist

Design Review Checklist: useful during the design, as well as during the review session

  • What problem is being solved?
    • What requirements does it tie to?
  • Describe the design
    • Artifacts that document the design
    • Specific challenges faced – and how they are addressed
    • Assumptions
    • Design pattern(s) used
    • Approaches considered / rejected. Why?
    • Evidence that this design works: e.g. prototype; tests
    • Walk through prototype – if one exists
    • Known / potential limitations
    • Confirm that all requirements are met – or identify those missing (if we’re reviewing work in progress)
  • Any new technology involved?
    • Why?
    • How well has it been tested?
    • Other candidates reviewed / rejected. Why?
    • New hardware / software that needs to be purchased / licensed for Dev or QA
    • New open-source packages added to product?
  • Impact on testing / QA
    • Tests that are now obsolete
    • New test fixtures that need to be built – or migrated from Dev to QA
  • Impact on product. For each of the following, identify anything new and/or modified
    • Code version / Build / Unit tests
    • APIs? Interfaces?
    • Classes / packages
    • DB schema
    • Error handling
    • Logging
    • Security
    • Installation
    • Provisioning / Configuration
    • Monitoring / Management console
    • Online help / tips / messages
    • Internationalization
    • Product documentation / manuals
    • Troubleshooting guides / info for tech support
    • Marketing documentation
  • Next steps
    • Can the design be simplified?
    • Location of all design documentation / code / other artifacts / info on external stuff added to product
    • Complete test scenarios
    • Next steps in implementation
    • Next steps in testing
    • When will we be sure it works?
    • Who can help?

In Favor of Design Reviews

Think holistically; code-and-test incrementally
Design reviews give a chance for your teammates to contribute – and for you to communicate the impact of your proposed implementation

Because Agile software development methodologies place relatively low emphasis on design, little has been written on design reviews.

I personally strongly believe in upfront design (see previous post), and thus design reviews.

To me, the same argument that is made for pair programming can be made about the importance of design reviews – and conversely, I have a hard time understanding why one would advocate pair programming but not design reviews: “two heads think better than one”.

Any “bug” found during the design phase will cost a lot less to fix than one found during the implementation phase.

Furthermore, the advice of my peers is more useful to me in the early stages than when I am 95% done. It’s a lot easier to incorporate their suggestions, or explore alternatives, when no code has been written.

In summary, in my view, the best approach is to spend time upfront figuring out the design, and once I have a good idea of what I want to build, to code it using the Agile methodology. In other words: “Think holistically; code-and-test incrementally.”

So when should a design review be held?

As a developer, I want to hold a design review when:

  • I need help
  • I want to confirm that I am on the right track
  • I want to double check that I have not missed anything
  • I want to communicate some assumptions that I have made that impact other components.

The design review is important not only to validate the design, but also to communicate what I plan to do and how it will impact others: developers, as well as testers, and even tech support, documentation, and product marketing.

In the next post, I’ll publish a checklist. Its purpose is primarily as a tool during the design process itself, to make sure that all aspects of the design have been considered. It is also useful during the design review session as a guide for the discussion. Finally, you can infer from the checklist all the people who need to be informed about the design, and ultimately the implementation.

In Favor of Architecture Design

An upfront architecture design phase can save a lot of time, and pain, prior to entering an Agile code development phase – particularly when complex requirements, high performance, or new technology are involved.

Agile software development methodologies seem to dismiss architecture design, in favor of incremental development and refactoring as needed. In my opinion, investing in upfront design not only accelerates projects, but can avoid unpleasant surprises and painful delays. Architecture design and agile methodologies easily work hand in hand: the architecture design phase focuses explicitly on eliminating technical risk. Once the technical framework has been validated, implementation follows, applying the traditional agile methodologies.

The “Agile” argument goes as follows: identify a new user story / feature, write a test that fails (but that is required to meet the user story), write the code to pass this test (as well as all preceding ones) – repeat. In the process, keep the code as simple as possible; and since the code is simple, and since you have an extensive suite of tests that validate existing functionality, refactoring is easy and fast. While this methodology is indeed very powerful, it is not universally applicable. In addition, there are intrinsic advantages to upfront design.
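To make that loop concrete, here is a miniature red/green cycle sketched in Python with pytest; the feature (a 10% discount on orders over 100) and every name are invented purely for illustration:

```python
# Miniature red/green cycle (pytest style).

# Step 1 (red): write a failing test that captures the new user story.
def test_discount_applies_above_threshold():
    assert price_with_discount(120.0) == 108.0   # 10% off orders over 100

# Step 2 (green): write just enough code to pass it (and all earlier tests).
def price_with_discount(amount: float) -> float:
    return amount * 0.9 if amount > 100 else amount

# Step 3: refactor while the suite stays green, then repeat with the next story.
```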

The main advantage of upfront design is “doing it right the first time”. By spending time upfront analyzing all the requirements, and technical challenges, and by evaluating competing approaches, one can avoid many dead-ends that one encounters when following an incremental approach. In the worst case of incremental design, one may run into a “killer” requirement near the end of the project which causes a complete refactoring of what’s been done before.

Further, even if an incremental approach eventually converges on the right implementation, one simply saves time by coming up with the “right design” the first time, and thus avoiding multiple refactoring efforts. While some issues only come up as one codes, spending sufficient time upfront will almost always eliminate unnecessary iterations.

In some situations, however, a phase solely dedicated to architecture design is warranted. For example:

  • To partition a complex project in multiple components that can be handed off to a team of developers
  • To work through complex – and possibly conflicting – requirements
  • To ensure critical performance, resource utilization or scalability requirements
  • To validate the suitability of new technology that will be incorporated into the product: completeness of features, interfaces, or performance and scalability.
  • To validate with end users the usability of User Interfaces

In particular, features that impact different layers of the code (e.g. UI, business logic, database) need upfront design in order to avoid time-wasting back and forth between developers. Letting the whole team work it out is simply not efficient. A recent such project for us was enabling an application for multi-tenancy. Similarly, I have found that any project that involves clustering, fault tolerance or high performance requires a dedicated and focused design – and validation – effort. Finally, incorporating any new technology – like an open-source package – must go through a prototyping phase: you never quite get what you expect …

By the way, an architecture design phase should follow the principles of Agile software development: keep it simple, and use incremental milestones that demonstrate completion of a subset of requirements.

To be effective, the architecture / design phase must limit itself to what is strictly necessary, namely what motivated the design effort in the first place: e.g. functional partitioning or performance validation.  Anything that can be left to implementation must be.

Finally, the design phase must be concluded with a design review! … More on this later.

Pair Programming – Does Anyone Do It?

Pair programming: not as efficient as individuals working on their own, but it provides valuable benefits – code reviews and joint ownership of the code

I was surprised to read an article in the New York Times about pair programming, “For Writing Software, a Buddy System,” that advocated 100% pair programming.

The New York Times article and Wikipedia give good definitions of pair programming – so I’ll only mention that the main idea behind this methodology is that “two heads are better than one”. While one engineer actually types in the code, the other reviews it, not only for typos, but also for all kinds of “gotchas” in the design or the implementation.

It is a no-brainer that a pair programming setup will lead to better code, written faster, than with a single programmer. The more difficult question, however, is whether this is more productive than two developers writing code on their own. In my estimate, no: two programmers working independently will generate better-quality code, overall, than a programming pair.

I have to admit I have never actually tried pair programming with my teams – mainly because I don’t personally know anybody who has actually done it, and could have overridden my prejudice against it.

This being said, there are benefits that we can leverage from pair programming:

  • Code reviews are definitely worthwhile. The time spent by a peer, or preferably by a technical lead, reviewing the code is a good investment against basic bugs and errors of interpretation in the spec or the design. Code reviews help ensure consistency across the product in areas such as configuration, initialization, error handling, logging, resource management, etc. Consistency leads to a higher-quality product.
  • More than one person knowing any given piece of code is also great practice. Not only is it good insurance should the original programmer fall under the proverbial bus, but it also helps in debugging, and gives everyone a broader picture of the whole project. It also reinforces the XP principle that the whole team owns the whole code, rather than having a set of individuals who own certain pieces of the code. This shared knowledge is particularly helpful when troubleshooting. Finally, it builds team spirit, and one always learns from the work of others.

What are your thoughts on pair programming? I would love to hear comments from people who have implemented pair programming on a production project.