Setting Expectations about Formal Releases with the Business Team

Product Management sets features and priorities – Engineering sets schedule … and meets schedule

While the business team may desire, or be obligated, to fix both the date of a release and the features that will comprise it, it is our job as Engineers to educate them on how unrealistic this approach is – assuming staffing cannot be increased and quality is not negotiable. It is also our job to offer alternatives.

By working collaboratively, we can redefine the desired outcome in a way that still meets the business needs and allows for a speedier implementation, without compromising quality.

The fundamental rule of engagement is that …

Product Management Sets Features And Priorities — Engineering Estimates The Schedule … And Meets The Schedule.

Engineering is a key contributor to the product roadmap, sometimes even the primary contributor. However, the Product Management (PM) team, by definition, owns the business derived from the product, and as such it calls the shots when it comes to defining features and priorities.

On the other hand, PM has no business pronouncing how long it will take to implement the desired feature sets – this is Engineering’s purview. This is no different than when we deal with a contractor to remodel our kitchen: we can tell them what we want the kitchen to look like, but they are not going to do business with us if we dictate how much they can charge, or how quickly the job will be completed.

Why? It simply boils down to ownership and accountability.

Engineers, by and large, are good sports and will do whatever they can to meet even the most unrealistic schedule. Ultimately, however, hard work cannot compensate for a schedule that is plainly not feasible.

Since it is Engineering’s job to develop the product, having anyone other than Engineering make estimates, or worse, commitments about schedule, removes the ownership and accountability of meeting the schedule from the Engineering team.

Schedules Only Have Value If There Is A Reasonable Expectation That They Will Be Met.

We set a schedule for the release of a product for a reason: so that other teams inside (e.g. marketing, sales) and outside the company (e.g. customers, partners) can make their own plans based on the availability of the product at a given date. If we don’t meet the schedule, these other teams will have to redo their plans and will resent us for it. Worse, if we establish a habit of missing schedules, they will stop making plans and simply wait and see until the product is actually delivered. This can create a vicious circle where the Engineering team sees that the Sales team does not plan on the product being ready on time, and thus does not feel the pressure to deliver on time – which reinforces the Sales team’s attitude, and so on.

The best way – by far – to build reliable schedules is to let the engineers who are responsible for the delivery of the product estimate their own schedule, for two reasons: first, the estimate will be more reliable; second, the engineers then have ownership of the schedule.

I have worked with a few CEOs who did not trust Engineering with estimates, and who were convinced that giving impossible tasks to the Engineering team ensured that they could get every last drop of blood/work out of the team. This is plain wrong.

Once in a while you can indeed rally the troops to meet an impossible target and “save the company”. Over time, however, this quickly becomes counter-productive. People will not accept arbitrary challenges and will simply disengage.

On the other hand, when empowered to estimate their own schedule, Engineers will then feel accountable for meeting it, and it will become a matter of personal, and team, pride to deliver on-time. Furthermore, this fosters a culture of success – and a virtuous circle of people being able to rely on teammates’ commitments.

Schedule And Feature Set Result From A Collaborative Effort

Pushing the kitchen remodeling analogy further: usually, the first bid is too expensive. What follows is a discussion about how flexible the dates are, which critical elements drive the dates and price, and a series of “what if …” scenarios. The same needs to happen between Engineering and Product Management: what flexibility do we have in the dates? For example, will the customer to whom we promised certain features actually go into production on that date, or will they accept a beta release because they need to run their own tests in a staging environment? Are all the options and variations of a particular capability required by the release date, or can some of them be pushed out to the next release?

One of the most satisfying moments of the job is when product managers and engineers brainstorm on how to meet the business needs of our customers in innovative ways. By bringing these two teams together and sharing knowledge – of customer needs, of why a certain feature requires a lot of work, or of how softening a specific aspect makes implementation much easier – we truly create value for the company. At the end of this brainstorm, we have optimized both the features we offer AND the effort required to deliver them. The continuous repetition of this exercise allows us to deliver more, faster.


The final ingredient is to provide transparency into the Engineering process – which is frequently considered a black box. It is because of this lack of visibility that people on the outside tend to buy themselves insurance and ask for a schedule that is more aggressive than it needs to be. In the next blog, we will show how Agile software engineering provides, among other things, not only visibility into the progress of the Engineering team, but also the ability to adjust course.

Violating the Laws of Physics: When the Business Imposes both Release Date and Features

Managing an Engineering team when the company imposes both the features to be delivered and the release date

In all the companies where I have worked, the business side has been “supportive” of Agile software development methodologies – in its own way 🙂. They like the story, and agree it makes total sense … except for the part where we talk about adjusting priorities and not committing ahead of time to delivering specific features by a certain date – even while recognizing that, historically, priorities have significantly changed in the midst of a release.


Given that, for all practical purposes, headcount is fixed (budgets are rarely elastic), and quality is non-negotiable, this combination of fixing both the features of the release and the release date violates the laws of physics! Only Engineering can estimate how long it will take to develop a certain set of features (given fixed resources and without impacting quality). The business team (product managers, VP Marketing, CEO, etc.) cannot estimate the amount of effort a given release will take. Just as we don’t tell our contractor how long (or how much) it will take to remodel our kitchen, the business team must let Engineering scope the effort, and time, required for a release.


In this multi-part blog, I will present my recommendations on how to best manage a team in this environment. I hasten to say that I have not found the perfect solution, and I am still working hard at refining it daily.


Three important aspects drive the management techniques:

  1. Understanding, and communicating to the Engineering team, the “Why”: why the business needs to impose both dates and features … and why this is unlikely to change materially
  2. Communicating to the business teams what they can expect from the Engineering team, and establishing “rules of engagement”
  3. Understanding, and implementing, the Agile principles that are most helpful in this environment.

The (Legitimate) Reasons for Formal Release Processes

To be clear, a continuous release process, where new features are deployed as soon as they are developed and tested, is ideal. Unfortunately, this is only possible in specific environments, e.g. self-hosted web apps, and does not work for ISVs.


Most ISVs that sell to enterprises need to publish an 18-24 month product roadmap. Customers don’t just buy today’s product, but also tomorrow’s. This two-year product roadmap is most critical for startups, whose buyers accept the risk of buying from a fledgling company because of the promise of the continuous stream of benefits committed in the product roadmap. “Committed” is the operative word: because of its startup status, the company must meet every single one of its promises in order to maintain the fragile trust of its customers.


Here are some common (legitimate) reasons that prevent a “continuous” release process and require formal releases:

  • The software is installed on customers’ premises. In this case, it is important to “version” the code: it would be impossible to manage communications, installation, or support if each customer installed a different version
  • The overhead of introducing new features makes it too costly to release one feature at a time. Any of the following factors can make this overhead too dear – even in the case of a hosted (SaaS) web app:
    • QA – when a bug cannot be afforded (e.g. financial applications, regulatory constraints)
    • End-user communications & training: when the usage paradigm changes significantly, deliberate communications and user training will be necessary. Similarly, some environments have seasonality that limits the opportunities to introduce something new (e.g. schools, retail)
    • Integration with 3rd-party partners and/or customer systems: any change in our software will require a phase of joint testing
    • Customer release processes: even in a SaaS environment, customers (e.g. in mission-critical applications such as e-mail, and/or sensitive applications like finance) will impose their own pace of deployment, and limit the number of upgrades to 1 or 2 a year
    • Marketing: the business may be such that opportunities to communicate to customers, partners, and analysts are rare and costly; consequently, it is necessary to group features in a release to maximize the excitement about the product announcement
    • Customer commitments: as the CEO, VP of Sales, and sales team scour the country, or the globe, in search of orders, they make commitments to customers – in order to win deals – that certain features will be available by a certain date


In all the situations listed above, one could argue that there is no reason why Engineering should not be involved in setting dates before commitments are made. The point is correct: it is in everyone’s interest to involve Engineering before making commitments. This is called the Product Roadmap process … which we will discuss in a subsequent blog.

Cloud Computing – The Miracle Tool for Testing

Cloud Computing eliminates restrictions due to the number of servers in the QA lab, and thus allows concurrent testing by developers and QA engineers. By making it easy to test often, and to expose early releases to the outside world, Cloud Computing will improve product quality

Does this story ring familiar? You are in a planning meeting for the next release, and learn that in addition to supporting Oracle 11g, the product will also need to support Microsoft SQL Server 2008 (or DB2, or MySQL, or PostgreSQL). Once the typical brouhaha dies down – about how complicated this will be, how the whole code will need to be ripped apart, and how much time this will take – the Director of QA turns to you and asks for a couple of additional servers for the QA lab, so that the software can be tested on the two databases in parallel; a minimum of three servers: one for the database, one for our software, and one for the test fixtures. The following day, it’s the development lead’s turn to ask for more servers: at least one “populated” database against which the developers can test, plus another set up for the daily build, etc. Makes perfect sense … except that no budget has been allocated for these servers! Soon you find yourself with your beggar’s cup in the CEO’s office, explaining to him and the CFO why your team needs these extra servers when “you already have so many!!”

Rejoice! Here comes Cloud Computing to the rescue …

Cloud Computing not only eliminates the need to purchase servers for testing, but actually radically improves your ability to test – and thus product quality.

Cloud Computing services, such as Amazon EC2, offer the ability to deploy (and un-deploy) software on demand. One pays “by the hour” of computing used, and for the storage and bandwidth consumed. This is perfect for testing (by developers and by QA): compute load varies greatly over the cycle of the day, as well as over the cycles of the release.

First of all, every developer can now have his/her own test setup against which to test. There are no hardware limitations, no begging, borrowing, or stealing from your colleagues to find unutilized servers. One can just deploy at will. Furthermore, there is no restriction on the number of servers: if you need to test a four-server cluster, you don’t have to hunt around for free servers – you just do it.
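
To make this concrete, here is a minimal sketch of on-demand provisioning using boto3 (the AWS SDK for Python). The region, AMI, instance type, and tags are illustrative placeholders – the assumption is an image with the product under test pre-installed:

    import boto3

    # Illustrative placeholders -- substitute your own region, image, and sizing.
    REGION = "us-east-1"
    TEST_AMI = "ami-0123456789abcdef0"  # hypothetical image with the product pre-installed
    INSTANCE_TYPE = "m5.large"

    ec2 = boto3.resource("ec2", region_name=REGION)

    def provision_test_cluster(num_servers):
        """Launch a throwaway cluster -- e.g. num_servers=4 for a four-server cluster test."""
        instances = ec2.create_instances(
            ImageId=TEST_AMI,
            InstanceType=INSTANCE_TYPE,
            MinCount=num_servers,
            MaxCount=num_servers,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "purpose", "Value": "qa-throwaway"}],
            }],
        )
        for instance in instances:
            instance.wait_until_running()  # block until the environment is usable
        return instances

    def teardown(instances):
        """Un-deploy as soon as the run completes -- you stop paying for the hours."""
        for instance in instances:
            instance.terminate()

The shape of the workflow – deploy, test, un-deploy – is the point, not the specific API: the same pattern applies to any provider that bills by the hour.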

Similarly, the daily build can deploy to multiple test environments concurrently, and thus accelerate validation of the build.

Finally, the QA team can also test in multiple environments simultaneously, e.g. Oracle and SQL Server at the same time! This offers the potential benefit of being able to test a much larger number of deployment scenarios than would be possible using one’s own hardware.
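
As a sketch of what testing Oracle and SQL Server at the same time might look like, the snippet below fans the same suite out to two freshly deployed environments. The endpoints are hypothetical, and the --db-url option is an assumed hook in your own test harness (not a built-in pytest flag):

    import concurrent.futures
    import subprocess

    # Hypothetical endpoints of two environments deployed in the cloud.
    ENVIRONMENTS = {
        "oracle":    "oracle-qa.example.com:1521/PRODDB",
        "sqlserver": "mssql-qa.example.com:1433/PRODDB",
    }

    def run_suite(name, endpoint):
        """Run the full test suite against one environment; return its exit code."""
        # Assumes your suite accepts a custom --db-url option (e.g. added in conftest.py).
        result = subprocess.run(["pytest", "tests/", "--db-url", endpoint])
        return name, result.returncode

    # One worker per environment: both databases are exercised concurrently.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_suite, n, e) for n, e in ENVIRONMENTS.items()]
        for future in concurrent.futures.as_completed(futures):
            name, code = future.result()
            print(f"{name}: {'PASS' if code == 0 else 'FAIL'}")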

Naturally, leveraging a Cloud Computing infrastructure requires new tools.

First and foremost, all the tests must be automated. While technology has created virtual servers, it has not yet invented virtual test engineers 🙂. Secondly, one will have to build tools to automatically deploy the new version of the software (e.g. from the build environment) and the test fixtures, as well as collect the results of the test runs.

One can be quite creative with the test management tools. For example, if a test setup encounters a high-severity bug, you could configure your test software to pause the test, deploy to a second environment, and continue testing there. This lets you go back to the first test setup to troubleshoot and find the cause of the crash.
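
Here is one way that pause-and-continue logic could be orchestrated. deploy_environment() and run_tests() are hypothetical hooks into your own deployment and test tooling, stubbed here so the sketch runs end-to-end:

    import itertools

    _ids = itertools.count(1)

    def deploy_environment():
        """Hypothetical hook: provision a fresh cloud test environment."""
        return f"env-{next(_ids)}"

    def run_tests(env, start_at, total):
        """Hypothetical hook: run tests [start_at..] in env.
        Returns (index of last test run, whether a high-severity bug was hit).
        Stubbed to 'all pass' for illustration."""
        return total - 1, False

    def run_with_failover(total):
        """Run the whole suite to completion, freezing any environment that hits a SEV1."""
        frozen = []                         # environments preserved for troubleshooting
        env = deploy_environment()
        next_test = 0
        while next_test < total:
            last_run, hit_sev1 = run_tests(env, next_test, total)
            next_test = last_run + 1
            if hit_sev1:
                frozen.append(env)          # leave this setup intact, crash and all
                env = deploy_environment()  # ... and carry on in a brand-new one
        return frozen

    print(run_with_failover(200))  # -> [] with the all-pass stub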

Another fascinating advantage is that you can deploy demo or beta systems at will (assuming your deployment model allows it), and let your sales team or prospective customers “play with” the early release. By making it easier to expose early releases of the product to the outside world, Cloud Computing further improves the quality of your product.

Will you save money by testing in a Cloud Computing infrastructure?

Obviously, the answer depends … on your usage, but also on factors like how much data you need to keep permanently in the cloud. For example, you may need to permanently store a synthetic database of a million users (it would be too slow to upload it each time). You will also incur higher networking traffic.

In addition, you may not want to move all your tests to the cloud. For example, you may want to keep your stress tests or longevity tests in-house, since these will be running 24×7, and you may want the option of running them on bare metal.
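
A back-of-envelope model makes the trade-off concrete. All the rates below are illustrative placeholders, not actual provider pricing – plug in your own usage numbers:

    # Illustrative placeholder rates -- NOT real provider pricing.
    HOURLY_RATE = 0.40     # $ per server-hour
    STORAGE_RATE = 0.10    # $ per GB-month kept permanently in the cloud
    TRANSFER_RATE = 0.09   # $ per GB of network traffic

    def monthly_cost(server_hours, resident_gb, transfer_gb):
        return (server_hours * HOURLY_RATE
                + resident_gb * STORAGE_RATE
                + transfer_gb * TRANSFER_RATE)

    # Bursty functional testing: 3 servers x 4 hours/day x 20 days,
    # with a 50 GB synthetic database kept resident in the cloud.
    print(f"functional: ${monthly_cost(3 * 4 * 20, 50, 100):.2f}")

    # A 24x7 longevity rig: 2 servers running the entire month -- exactly the
    # always-on load that may be cheaper to keep in-house on bare metal.
    print(f"longevity:  ${monthly_cost(2 * 24 * 30, 0, 20):.2f}")

With these (made-up) rates, the bursty workload is cheap in the cloud, while the always-on rig costs several times more per month – which is why the 24×7 tests are the ones to consider keeping in-house.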

At the end of the day, to me the attraction of Cloud Computing for testing is that it will increase quality (in addition to reducing costs). It will allow each developer to access a test environment at will. It will create an additional impetus for test automation. Cloud Computing will also allow the concurrent deployment of tests to an arbitrary number of computing environments, and make it easier to give early access to your customers. Net-net, this translates to more tests in the same amount of time with less effort. It’s all goodness.

“Dailies” Bolster Creativity

Design reviews do not simply allow me to have my design reviewed; they also give me the opportunity to inspire my teammates with my own ideas and kick-start brainstorming discussions – thus fostering team creativity.

By pure coincidence, I recently listened to a 2008 Harvard Business School IdeaCast interview of Ed Catmull, cofounder of Pixar and President of Pixar and Disney Animation, on the topic of “Pixar and Collective Creativity”.

The interview centered on the mechanisms that foster innovation – for which Pixar is so famous. Ed Catmull emphasizes the importance of communication at and across all levels, and he constantly encourages anyone and everyone to share their thoughts, critiques, and suggestions.

To this effect, he encourages all the teams at Pixar to hold dailies: meetings at the end of the day where each animator shows the rest of the team their accomplishments of the day, whether complete or not. This is a vulnerable moment, where one has to show work in progress, warts and all, to colleagues (there’s always a bit of a competitive spirit at work) and possibly to Ed himself, if he happens to drop by. Yet it is also a great opportunity not only to stimulate suggestions from one’s colleagues on how to improve one’s own work, but also to give ideas, or kick-start a brainstorm, about the project in general and other people’s work.

To me, the concept of dailies translates naturally to design reviews in the software development world, as I blogged a few days ago. I don’t necessarily advocate for daily design reviews, but certainly for frequent ones; most importantly early on, before foundational decisions are made, so as to actually benefit from the team’s suggestions.

Ed Catmull highlights another set of benefits of design reviews, potentially even more powerful for fostering team creativity (rather than just individual creativity) than simply having my design double-checked: my own work and ideas can inspire my colleagues, and the very process of reviewing my work can stimulate brainstorming discussions about new concepts and ideas. This is powerful stuff!

Design Review Checklist

Design Review Checklist: useful during the design, as well as during the review session

  • What problem is being solved?
    • What requirements does it tie to?
  • Describe the design
    • Artifacts that document the design
    • Specific challenges faced – and how they are addressed
    • Assumptions
    • Design pattern(s) used
    • Approaches considered / rejected. Why?
    • Evidence that this design works: e.g. prototype; tests
    • Walk through prototype – if one exists
    • Known / potential limitations
    • Confirm that all requirements are met – or identify those missing (if we’re reviewing work in progress)
  • Any new technology involved?
    • Why?
    • How well has it been tested?
    • Other candidates reviewed / rejected. Why?
    • New hardware / software that needs to be purchased / licensed for Dev or QA
    • New open-source packages added to product?
  • Impact on testing / QA
    • Tests that are now obsolete
    • New test fixtures that need to be built – or migrated from Dev to QA
  • Impact on product. For each of the following, identify anything new and/or modified:
    • Code version / Build / Unit tests
    • APIs? Interfaces?
    • Classes / packages
    • DB schema
    • Error handling
    • Logging
    • Security
    • Installation
    • Provisioning / Configuration
    • Monitoring / Management console
    • Online help / tips / messages
    • Internationalization
    • Product documentation / manuals
    • Troubleshooting guides / info for tech support
    • Marketing documentation
  • Next steps
    • Can the design be simplified?
    • Location of all design documentation / code / other artifacts / info on external stuff added to product
    • Complete test scenarios
    • Next steps in implementation
    • Next steps in testing
    • When will we be sure it works?
    • Who can help?

In Favor of Design Reviews

Think holistically; code-and-test incrementally
Design reviews give a chance for your teammates to contribute – and for you to communicate the impact of your proposed implementation

Because Agile software development methodologies place relatively low emphasis on design, little has been written on design reviews.

I personally strongly believe in upfront design (see previous post), and thus design reviews.

To me, the same argument can be made for the importance of design reviews as is made for pair programming – and conversely, I have a hard time understanding why one would advocate pair programming but not design reviews: “two heads think better than one”.

Any “bug” found during the design phase will cost a lot less to fix than one found during the implementation phase.

Furthermore, the advice of my peers is more useful to me in the early stages than when I am 95% done. It’s a lot easier to incorporate their suggestions, or explore alternatives, when no code has been written.

In summary, in my view, the best approach is to spend time upfront figuring out the design, and once I have a good idea of what I want to build, to code it using the Agile methodology. In other words: “Think holistically; code-and-test incrementally.”

So when should a design review be held?

As a developer, I want to hold a design review when:

  • I need help
  • I want to confirm that I am on the right track
  • I want to double check that I have not missed anything
  • I want to communicate some assumptions that I have made that impact other components.

The design review is important not only to validate the design, but also to communicate what I plan to do and how it will impact others: developers, as well as testers, and even tech support, documentation, and product marketing.

In the next post, I’ll publish a checklist. Its primary purpose is as a tool during the design process itself, to make sure that all aspects of the design have been considered. It is also useful during the design review session, as a guide for the discussion. Finally, you can infer from the checklist all the people who need to be informed about the design, and ultimately the implementation.

In Favor of Architecture Design

An upfront architecture design phase can save a lot of time, and pain, before entering an Agile code development phase – particularly when complex requirements, high performance, or new technology are involved.

Agile software development methodologies seem to dismiss architecture design in favor of incremental development and refactoring as needed. In my opinion, investing in upfront design not only accelerates projects, but can also avoid unpleasant surprises and painful delays. Architecture design and Agile methodologies easily work hand in hand: the architecture design phase focuses explicitly on eliminating technical risk. Once the technical framework has been validated, implementation follows, applying the traditional Agile methodologies.

The “Agile” argument goes as follows: identify a new user story / feature, write a test that fails (but that is required to meet the user story), write the code to pass this test (as well as all preceding ones) – repeat. In the process, keep the code as simple as possible; and since the code is simple, and since you have an extensive suite of tests that validate existing functionality, refactoring is easy and fast. While this methodology is indeed very powerful, it is not universally applicable. In addition, there are intrinsic advantages to upfront design.
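
To make the loop concrete, here is a miniature red-green cycle in Python with pytest; the user story (“reject passwords shorter than 8 characters”) and the function name are invented for the example:

    # Step 1 (red): write a test that fails before any code exists.
    def test_short_password_rejected():
        assert not is_valid_password("abc123")

    def test_long_password_accepted():
        assert is_valid_password("correct-horse-battery")

    # Step 2 (green): write the simplest code that passes these tests
    # (and all preceding ones). Then pick the next story and repeat,
    # refactoring as needed under the protection of the growing suite.
    def is_valid_password(password):
        return len(password) >= 8

Running pytest on this file passes both tests; the suite accumulated story by story is what makes the later refactoring “easy and fast”.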

The main advantage of upfront design is “doing it right the first time”. By spending time upfront analyzing all the requirements and technical challenges, and by evaluating competing approaches, one can avoid the many dead-ends one encounters when following an incremental approach. In the worst case of incremental design, one may run into a “killer” requirement near the end of the project that forces a complete refactoring of everything done before.

Further, even if one ends up with the right implementation in the end, one will simply save time by coming up with the “right design” the first time, and thus avoiding multiple refactoring efforts. While some issues only come up as one codes, spending sufficient time upfront will almost always eliminate unnecessary iterations.

In some situations, however, a phase solely dedicated to architecture design is warranted. For example:

  • To partition a complex project in multiple components that can be handed off to a team of developers
  • To work through complex – and possibly conflicting – requirements
  • To ensure critical performance, resource utilization or scalability requirements
  • To validate the suitability of new technology that will be incorporated into the product: completeness of features, interfaces, or performance and scalability.
  • To validate with end users the usability of User Interfaces

In particular, features that impact different layers of the code (e.g. UI, business logic, database) need upfront design in order to avoid time-wasting back-and-forth between developers. Letting the whole team work it out is simply not efficient. One recent such project for us was enabling an application for multi-tenancy. Similarly, I have found that any project involving clustering, fault tolerance, or high performance requires a dedicated and focused design – and validation – effort. Finally, incorporating any new technology – like an open-source package – must go through a prototyping phase: you never quite get what you expect …

By the way, an architecture design phase should itself follow the principles of Agile software development: keep it simple, and use incremental milestones that demonstrate completion of a subset of requirements.

To be effective, the architecture / design phase must limit itself to what is strictly necessary – namely, what motivated the design effort in the first place: e.g. functional partitioning or performance validation. Anything that can be left to implementation must be.

Finally, the design phase must conclude with a design review! More on this later.

Pair Programming – Does Anyone Do It?

Pair programming: not as efficient as individuals working on their own, but provides valuable benefits: code reviews and joint ownership of the code

I was surprised to read an article in the New York Times about pair programming, “For Writing Software, a Buddy System”, which advocated 100% pair programming.

The New York Times article and Wikipedia give good definitions of pair programming, so I’ll only mention that the main idea behind this methodology is that “two heads are better than one”. While one engineer actually types in the code, the other reviews it – not only for typos, but also for all kinds of “gotchas” in the design or the implementation.

It is a no-brainer that a pair programming setup will lead to better code, written faster, than with a single programmer. The more difficult question, however, is whether this is more productive than two developers writing code on their own. In my estimate, no: two programmers working independently will generate better-quality code than a programming pair.

I have to admit I have never actually tried pair programming with my teams – mainly because I don’t personally know anybody who has actually done it, and could have overridden my prejudice against it.

This being said, there are benefits that we can leverage from pair programming:

  • Code reviews: definitely worthwhile. The time spent by a peer, or preferably a technical lead, reviewing the code is a good investment against basic bugs and errors of interpretation in the spec or the design. Code reviews help ensure consistency across the product in areas such as configuration, initialization, error handling, logging, resource management, etc. Consistency leads to a higher-quality product.
  • More than one person knowing any given piece of code: also great practice. Not only is it good insurance should the original programmer fall under the proverbial bus, but it also helps in debugging and gives everyone a broader picture of the whole project. It reinforces the XP principle that the whole team owns the whole code, rather than having a set of individuals who own certain pieces of it. This shared knowledge is particularly helpful when troubleshooting. Finally, it builds team spirit – and one always learns from the work of others.

What are your thoughts on pair programming? I would love to hear comments from people who have implemented pair programming on a production project.

MVP – Minimum Viable Product

Defining the Minimum Viable Product requires selecting a segment of target customers and delivering the smallest critical mass of features – as early as possible – provided that you can charge a high enough price for it.

I have recently discovered, with great delight, Eric Ries’ “Startup Lessons Learned” blog, and in particular his post about the Minimum Viable Product (MVP). This is not surprising, since we are both fans of Steve Blank‘s Customer Discovery Process.

Eric’s post reminded me, how critical, yet how difficult in practice, the concept of Minimum Viable Product is.

Defining the minimum viable product correctly allows you to release products that are valuable to your customers with the minimum amount of energy and time invested – because, as the name says, you have done the minimum, and yet you provide value. Said differently, if you only need 2 features in your product in order to sell it for $100, then you’d be crazy to spend the extra effort to add a 3rd or a 4th feature. Plus, by only delivering the minimum, you get to market fast – and hopefully beat the competition.

So why is this so difficult in practice … at least in my experience 🙂 ?

My first answer is that it is a lot easier to define the Maximum Product than it is to define the Minimum Viable Product.

Defining the Maximum Product entails compiling a list of all the possible features that your product could possibly have: you only need to talk to a handful of customers and take good notes. Critical thinking is not required. It is easy to get consensus on the Maximum Product: more is always better. The only problem is that no company can afford the time it takes to deliver this “ideal” product. Hence the need for the MVP.

The first step in defining the MVP is the one that is most often overlooked: you first need to define the segment of customers that you target with the new product. The segment has to be small enough to group customers with similar requirements, but large enough that your new product will generate sufficient revenue.

The second step is to define the theme of the product in terms of benefits (not features). One of the best tools to help define this theme is to imagine that you are putting up a huge billboard on Highway 101 (one of the main arteries of Silicon Valley) to advertise the new product: what does the billboard say?

The third and final step is to define the critical mass of features in the release. In this step, ruthless time-vs-feature-vs-price trade-offs need to be made – because the question is not just “What features do our target customers absolutely need?” (this list will always be too long), but rather: “Will our customers be willing to buy the product with these features – available at this date – at this price?” Economically, this question may have multiple correct answers. In practice, however, presented with this question, customers will often select a date in the near term, which in turn defines the minimum viable product.

Who Owns Quality? Part 5 and end

By testing early, we improve the predictability of the release, and we shorten the time to release.

Let us now turn to how our early focus on quality impacts methodology and release management.

Account for testing time in the plan

The most visible impact is that each developer must account for the time to fully test the code in his/her task estimates. It also behooves the release lead (scrum master) to remind developers to include testing time in their estimates. Each task must thus include: design, coding and unit tests, a testing brainstorm with QA, building test fixtures, generating test data, executing the tests … and some buffer to address whatever problems are discovered during testing. Also remember that testing includes performance validation as well as functional validation.

Involve QA from day 1

Similarly, account for QA’s time starting from day 1 (not necessarily full time) in your project plans, rather than planning for QA’s work to start at the QA phase. As soon as a design takes shape, QA (and the developers) must figure out how to test it – and build the tools to do so.

The more innovative the design, the earlier QA needs to be involved: a new architecture, or a radically new category of features, is likely to require a radically new set of testing tools.

Finally, having QA involved at the inception of a design allows developers and QA engineers to truly team up.

Show-and-tell as you release to QA

While XP and Agile advocate writing the test code before writing the actual code, I don’t personally care, and let each developer do as he/she chooses. What IS important is that each developer proves that the code works before claiming to be done!

To this effect, I usually request a show-and-tell as a “rite of passage” for releasing to QA. The show-and-tell goes like this:

  • QA provides a clean standard environment for the product
  • Developer installs his/her build including the new feature(s)
  • Developer demos core functionality and performance
  • QA engineer asks questions, and, if desired, requests additional tests to be run
  • When satisfied, QA formally accepts the feature(s)

I like to invite as many people to the show-and-tell as the feature(s) warrant: at minimum, the product owner and all the leads of the project, but there is no limit … for major accomplishments, don’t hesitate to bring in the CEO, VP of Sales, VP of Marketing, the receptionist (seriously), etc.

This show-and-tell is a great opportunity to recognize the developers and QA engineers who made it happen. It also kills the silly arguments between Development and QA that drive me crazy, where a developer denies a bug because “…it works on my system!”

The more you test, the faster you develop

It sounds like we have added a whole bunch of work to the development phase, and thus caused the release to take longer. In practice, it’s quite the opposite.

Testing and bug fixing must take place at one point or another before the product is released. The choice is thus simple: “Pay now … or pay more later!” Either you test the code early, or you wait until the end of the release – but at that point, the cost, and personal pain, of fixing each bug will be that much greater.

While one would think that the development phase would take much longer with all this testing, it actually does not change much. You gain time because testing is now done in parallel, in real time, as the code is being developed.

You may “lose” some time because you are testing performance upfront.

On the other hand, you gain a significant amount of time at the end of a release, because your QA phase is now a true quality assurance phase rather than a bug-discovery-and-fixing phase. Having tested early, you have eliminated the unpredictability of this phase. You no longer have to fear the “show-stopper” bug that used to pop up in the last days of the release.

In summary, from the inception of the project to the actual release to the customer, you will experience significant time savings. Equally important, you will increase the predictability of your release schedule by an order of magnitude. To quote the Agile Manifesto: “Working software is the primary measure of progress”. By testing concurrently with code development, you advance the time at which the software actually works – and thus the predictability of the product release date!