(Boosting) Morale in Engineering

The recent article by Jessica McKellar, “This Is What Impactful Engineering Leadership Looks Like”, and the question “Any suggestions on how to inspire my team?” published on Everwise prompted me to reflect on what impacts morale in Engineering teams.

At the risk of appearing to deflect my responsibilities as a VP of Engineering, I will assert that morale in Engineering is driven primarily by company culture. Consequently, in order to boost morale, my first priority is to focus outwards and educate the company leadership on how to create a culture that fosters productivity in Engineering.

In my experience, engineers, like most people, are motivated by a sense of purpose and accomplishment. Unrealistic deadlines imposed by the business teams, or constantly changing priorities, for example, will sap the morale of any team, no matter how capable, or charismatic, its leader.

Consequently, the answer to “How do you motivate your team?” is that I first eliminate everything that demotivates them – which is at least half the battle. Then I make sure that we employ the proper tools and methodologies, so that we are efficient collectively as well as individually. Only on rare occasions do I metaphorically stand on a soapbox and deliver a rousing motivational speech.


Does anyone really think that a professional football player needs a motivational speech before stepping on the field on Sunday? Heck no! He’s been waiting for that moment all week, all year! The rah-rah speech from the coaches or team captains that ESPN shows us is just for the cameras. Said another way, if a player needs this pre-game sideline speech in order to go all out on the field, then he’s in the wrong business, and I certainly wouldn’t keep him on my team.

Well, it’s the same for Engineers.


This list of “morale killers” may appear self-evident. Yet I see these mistakes perpetuated over and over.

  • Imposing unrealistically aggressive schedules for releases – whether on purpose or not
  • Frequent (i.e. more than every 6 months) changes to the corporate strategy that nullify the existing product roadmap
  • Asking the Engineering team for an extraordinary effort to deliver a feature to win a major deal, only to fail to win the deal … more than a couple of times
  • Excluding engineers from customer meetings
  • Failing to publicly recognize accomplishments – whether collective or individual

One of the most counter-productive patterns is to purposely impose an unrealistic deadline based on the illusion that it will motivate engineers to work harder than they normally would. This pattern is ill-informed for the following reasons:

  • Engineers may work longer hours when required, but it is unlikely that they will produce their best work during these long hours. It could even be counter-productive if a higher proportion of bugs is introduced.
  • Sustained long hours foster neither creativity nor attention to detail
  • The most aggressive schedule is accomplished by setting a realistically aggressive schedule at the onset. Just like a sprinter has to set progressively aggressive times as the season progresses, each release schedule has to be aggressive, yet achievable.
  • Unrealistic deadlines are rarely met. As a consequence, even if the team delivers an amazing product in an incredibly short amount of time, on release day, we all feel like we failed (since we did not meet the crazy deadline). It is hard to build on top of failures.
  • On the contrary, by setting realistic deadlines, and ensuring that we hit them, we build confidence in ourselves. Furthermore, our internal partners (e.g. Marketing, Sales), as well as our customers, also start trusting us and our dates. Success begets success.


Not only are Engineers driven by success, we also care about the products we build. We want to ship products on time, we want our users to be thrilled by the product and we want the company to grow. Consequently, my only job is to remove all impediments to these fundamental motivations. I thus focus on:

  • Providing clear strategy and tactics
    • Why are we doing what we are doing (vision, product roadmap, business context) as well as what are our immediate priorities.
    • Ensure that each team member has 1 – and only 1 – top priority
  • Expecting, and nurturing, a culture of results and forward-looking attitude. Focus on the challenge at hand, rather than laying blame.
  • Making post-mortem reviews actionable: by deciding what we will do differently, and better, next time (rather than on an exhaustive list of things we did wrong) – and following up to ensure that we do do things differently the next time around
  • Making it “our team” rather than “my team” – by encouraging collaboration and ideation from everyone, particularly when it comes to development methodology. Adoption of best practices is all the easier when recommendations come from peers.
  • Making it easier, simpler to ship products by creating product-focused teams, and limiting meetings to those that are determined useful by the team
  • Stimulating productivity by encouraging maximum use of tools and automation … and a minimum number of meetings
  • Fostering teamwork by encouraging, even requiring, open and timely communication (good and bad news alike). Emphasize empathetic cross-team communication (e.g. “be aware that the changes I had to make to the API have subtle implications for your component …”)


In addition to removing impediments to productivity, and providing the right tools and environment for the Engineering team at large, one naturally needs to address each individual’s motivations:

  • Clarity of role: it must be made obvious to each engineer how their contribution feeds the success of the Engineering team and the company – both tactically and strategically
  • “Personalization”: understanding what drives each person in the team (technical, managerial challenges), how they prefer to communicate, their work style, etc
  • Responsibilities: ensure that everyone in the team is challenged to the best of their abilities (to the extent possible given the needs of the organization)
  • Personal rapport: team spirit is built from common aspirations, but also from one-to-one personal relationships, including with the VP of Engineering

Morale is a complex feeling that is not easy to nurture in a team. It is much easier to destroy than to boost. By removing the “morale killers” – which typically originate in the company culture – one can bring a team to a level of enjoyment and productivity where only a little more effort creates a virtuous circle of improvement, in which team members themselves drive further improvements.

Sprint 0 “vs” Agile

Members of my teams usually look at me funny when I state at the start of a project that we need to plan. The boldest ones may even venture: “We’re doing Agile, so we don’t need to plan”, implying that planning is synonymous with waterfall, and certainly incompatible with, if not contrary to, Agile.

This is a misguided debate. It matters not whether planning is Agile; what really matters is whether it is a good Engineering practice and, secondarily, whether it can be blended with an Agile methodology.


The need to plan arises whenever a complex set of features needs to be developed. Typically, complexity arises because the new project is dissimilar to anything we have done before, the scope is large, and/or we are dealing with “new stuff” (architecture, software framework, tools, people, performance, etc.).

The name Sprint 0

“Sprint 0” designates this planning phase … because the planning takes place before Sprint 1

However, it is partially a misnomer because it is not really a Sprint: it is not structured as a Sprint (the team may not have been formed yet), and its duration is not the typical sprint duration (it takes as long as it takes).

Analogy: Let’s go Hiking

Let’s say we’re going on a 5-day hiking trip in the wilderness. Before the hike, we will look at the map of the area, and profile of our hike (e.g. identify how much elevation we’ll need to climb) so as to distribute our daily efforts evenly across the 5 days [rough scoping]. In particular, we will identify places where to get water, and places to sleep each of the 4 nights, both really important [risk areas]. In addition, I’ll coordinate with my fellow hikers who is bringing the tents, who is buying/carrying what food, etc [roles & responsibilities]. Finally, I’ll copy the schedule of the park shuttle that will bring us back to where we parked the cars [overall schedule].

This is the equivalent of Sprint 0.

Planning is NOT Synonymous with Waterfall

The fundamental difference between a Sprint 0 plan and a Waterfall plan is that Sprint 0 plans JUST ENOUGH to eliminate risk, versus preparing a complete design and exhaustive task schedule.

Sprint 0 wants to eliminate surprises, such as unnecessary refactoring (e.g. because the UI team and the mid-tier teams have a different vision on how to build the API). The fact that both participate in the same daily scrum does not necessarily expose these differences.

The purpose of Sprint 0 is to, almost literally, identify “the lay of the land”: key features, roles, and major risk factors.

This plan leaves plenty of room to be Agile. Going back to the hiking analogy: on day 2, we can decide that we’ll walk extra hard on day 3, so that we can stay 2 days at campsite 4, which is on the shores of a beautiful lake. We could even decide to extend the trip by 1 day … as long as we ration our food accordingly.

The plan does not dictate at what time we get up, who cooks on what day, or what activities we’ll do on our “relax day”: fishing, swimming, playing Frisbee … But the plan highlights a “relax day”, and thus the need to bring a Frisbee or fishing rods.

In addition, the plan sets yardsticks along the way so that we can measure our progress against our overall objective. For example, we’d better make sure that by day 3, we are past the mid-way point, if we want to finish our trip on day 5.


The first activity of Sprint 0 is to review the main deliverables, both external (features) and internal (deploying a new framework, technical debt, performance). We also want to identify risks that could impact the technical solution or the schedule. It could be as trivial as having to finish a set of user stories before the key team member goes on vacation, or as complex as demonstrating that adding a cache does increase performance 10x.

We plan to a level of detail that gives us confidence that our design approach is solid and our schedule is realistic. How realistic depends on the needs of the business. Some companies commit to releases at a given date, others are fully agile.

Sprint 0 Deliverables

In InfoQ’s article What is Sprint Zero? Why was it Introduced?, one of the contributors, Mark Woyna, “uses Iteration Zero as a spike”: “The planning team is responsible for producing 3 deliverables by the end of the planning iteration:

  • A list of all prioritized features/stories with estimates
  • A release plan that assigns each feature/story to an iteration/sprint
  • A high-level application architecture, i.e. how the features will likely be implemented”

To which I add:

  • Design documentation relevant to the project: e.g. Interaction diagrams, Entity/Object definitions, APIs
  • A list of risks to monitor during the project: e.g. dependencies on external factors, critical results (e.g. validation of a new framework, or performance metric)
  • Detailed user stories for Sprint 1 – so that we can start Sprint 1 in earnest at the end of Sprint 0

Plan to a Judicious Level of Detail

Contrary to Waterfall practices, we don’t make all the decisions during Sprint 0; we make the minimum number of decisions necessary to “eliminate risk”.

Obviously, risk is only completely eliminated when the project is complete, but in most projects there are some critical decisions that reduce risk significantly. For example, writing out the interaction diagrams for the major use cases exposes the core assumptions about the main objects in the system and their responsibilities, and clarifies whether interactions are synchronous or asynchronous, what message broker we use, etc. The whole point is that hashing out disagreements over a diagram is a lot more efficient, and less costly, than doing it once code has already been written.


It’s Waterfallish

Scrum Methodology states “the common dysfunction called “Sprint Zero” is actually a contradiction in terms. Companies (and misinformed consultants and trainers) use this as a way to avoid changing waterfall habits.”

This argument totally misses the point – which is to have sound Engineering practices. Slapping an “Agile vs Waterfall” litmus test on it does not inform the discussion as to whether this particular practice is sound engineering.

Does Not Deliver Value to the Customer

The Scrum Alliance article What is Sprint Zero? asserts that “Scrum believes that every sprint should deliver potentially usable value (… by the customer)”.

Arguing that Sprint 0 does not deliver value to the customer at the end of the Sprint is myopic: it misses the more important benefit of Sprint 0, namely that it improves the velocity of the team for all the other Sprints. Because we lay out a high-level path to success in Sprint 0, we walk a straighter, and faster, line during the remainder of the Sprints. Equally important, we avoid “critical failures”, where a significant portion of the code needs to be refactored because we incremented our way into a design rather than taking the time to think it through.

Another way of saying this is that Sprint 0 brings value to us, the team, by providing better visibility to the whole project. We “return” this value to the customer, by being more efficient and faster overall.


When thinking about Engineering best practices, let us not corner ourselves into debating labels, e.g. Agile vs Waterfall. To me, it simply makes sense to take the time to reflect, think and plan before embarking on a complex project, in order to:

  • Evaluate key features to be implemented
  • Agree on key design and architecture decisions, such as entities, APIs, protocols
  • Identify risk areas: schedule, resources, technology, performance
  • Map out tasks over time (a) to ensure that the project will be completed in a time frame commensurate with the needs of the business and (b) to set up yardsticks against which to calibrate our progress in the future
  • Lay groundwork for Sprint 1.

Basics of Performance Testing

At the risk of sounding harsh, I cannot remember the last time I interviewed a QA engineer who could clearly explain to me what a Performance Test is. Even more worrisome, when they described how they usually run performance tests, they could only describe the mechanics of the test (“I ran JMeter”). More often than not, the tests were wrong, and, in any case, they could not interpret the results (largely because they did not know what numbers to look at).

So here is a basic description of what a performance test is and the important steps to follow – as well as one of the most common mistakes.

Note: for illustration purposes, I’ll use the term “site” (e.g. a retail e-commerce site), but this post applies identically to SaaS services.

Typical Performance Test

The primary performance test that engineers want to run on a product is: “How many users will our product support?”. This typically translates to “How many concurrent users can the current code and infrastructure support with acceptable response time?”

Another performance question is to determine how many total users can be supported. This problem is rarer, and we won’t cover it here.

The first challenge is: “what is a concurrent user?” While the answer is intuitively obvious: “the number of people using the site at the same time”, we still need to define what “at the same time” means. In particular, it does NOT mean “exactly” at the same time, i.e. within the same micro-second.

This incorrect “within the same microsecond” interpretation actually leads to the most common error in performance testing, where testers set up N JMeter clients (where N is the target number of concurrent users), launch them all at the same time, and measure the time it takes until the last client receives a response. This test does not measure concurrent users – in real life, users arrive on a site randomly, not at the exact same time. This is illustrated in the two figures below, which both represent 1,000 users using the site during a period of one hour. The “Burst” figure illustrates the “bad” test, where all 1,000 users hit the site in the first minute (and then there is no activity in the remaining 59 minutes). The correct scenario is closer to the second figure, where during each of the 60 minutes an average of 16.66 users hit the site (1,000 users / 60 minutes), although in any given minute the number of users may vary between, say, 5 and 25.


Another way of expressing that a site has 1,000 visitors per hour is that, on average, a new visitor hits the site every 3.6 seconds (3,600 seconds per hour / 1,000 visitors). Consequently, we can program JMeter to hit the site following a pseudo-random sequence with a mean of 3.6 seconds and a standard deviation of, for example, 1 second.

Depending on the length of our standard visit, we will need to deploy multiple clients. For example, if each visit simulation takes a minute, we will need an average of 16.6 clients (60 seconds per minute / 3.6 seconds) to perform the simulation.
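To make the arrival model concrete, here is a minimal Python sketch of the same idea (the 3.6-second mean and 1-second standard deviation come from the example above; the 60-second visit duration and the seed are assumptions for illustration):

```python
import random

random.seed(42)              # fixed seed so the arrival pattern is reproducible run after run

MEAN_GAP_S = 3.6             # one new visitor every 3.6 s on average (1,000 visitors/hour)
STDDEV_S = 1.0               # spread around the mean
VISIT_DURATION_S = 60.0      # assumed length of one simulated visit
TEST_DURATION_S = 3600.0     # one hour of simulated traffic

# Generate arrival times separated by pseudo-random gaps around the 3.6 s mean.
arrivals, t = [], 0.0
while t < TEST_DURATION_S:
    t += max(0.1, random.gauss(MEAN_GAP_S, STDDEV_S))
    arrivals.append(t)

# The peak number of visits in flight tells us how many load-generator clients
# we need (close to the 16.6 average computed above, plus some headroom).
events = [(a, 1) for a in arrivals] + [(a + VISIT_DURATION_S, -1) for a in arrivals]
in_flight = peak = 0
for _, delta in sorted(events):
    in_flight += delta
    peak = max(peak, in_flight)

print(f"visitors simulated: {len(arrivals)}, peak visits in flight: {peak}")
```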

How Many Concurrent Users to Use?

The excellent article How do you find the peak concurrent users on your site? describes in detail how to find the peak number of visits during a given hour on the site (for performance testing, we care about visits rather than visitors).

Of course, the key is to know our site well enough to identify the actual day(s) and hour(s) when we have the peak traffic.

However, we cannot stop there. The purpose of the test is to give us confidence that our site will be able to handle the load in the future. So the historic peak number of visits needs to be extrapolated to the anticipated lifetime of the release we are certifying. E.g. if we want to be ready for the upcoming Black Friday, we need to take last year’s numbers and increase them based on the projected traffic from Sales & Marketing … plus an extra 25% as a safety buffer.
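As a back-of-the-envelope illustration of that projection (the numbers below are made up):

```python
# Hypothetical numbers, for illustration only.
last_year_peak_visits_per_hour = 12_000   # measured historic peak (e.g. last Black Friday)
projected_growth = 1.40                   # Sales & Marketing forecast: +40% traffic
safety_buffer = 1.25                      # extra 25% headroom

target_visits_per_hour = last_year_peak_visits_per_hour * projected_growth * safety_buffer
print(f"Performance test target: {target_visits_per_hour:,.0f} visits/hour")
# -> 21,000 visits/hour, i.e. a new visit every 3,600 / 21,000 ≈ 0.17 s on average
```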

What is a Visit? What User Actions to Execute?

Now that we have figured out how many visits we need to run, and how often, we need to figure out what a visit is. There are two principal ways to answer the question:

  • a visit is an isolated user action
  • a visit is the simulation of an actual visit of a typical user.

Testing the Performance of an Isolated User Action

While we assume, and hope, that the visitors on our site don’t limit themselves to a single action when they visit, we may still want to measure the performance of a single action in two generic situations:

  • this is new functionality and we want to ensure that it is fast enough
  • as part of performance regression testing – we want to ensure that the new release is no worse than the past releases

The question now is, what isolated user actions should we test? Here are some examples:

  • Login / Home Page / Landing Page: the first page that is loaded when a user arrives on the site. It is important for it to be fast, since it creates the “first impression” of the site for the user – and it usually requires a lot of work server-side, since, by definition, we have no (or, for a returning user, little) cached data, so a lot of data needs to be computed or refreshed
  • All pages related to checkout and payment: we don’t want to lose a purchase because the site is too slow
  • Pages which Analytics in Production identify as slow
  • New Pages / heavily re-factored pages that are content- and/or compute-heavy

Testing the Performance of a Typical Visit

A “typical visit” can be constructed either “synthetically”, or “by replay”.

A synthetic visit is one that we build based on the nature of the site and statistics gleaned from the Production system – such as the average number of pages visited. A typical visit for a retail site would entail: login – select a category – then a sub-category – browse a couple pages – select an item – checkout and pay.

A “replay” visit is based on tracking the actual navigation of a random user on the site (e.g. using log files).

It is important to note that I should be using the plural – we need to identify multiple “typical visits”. For example, building on the example above, a second typical visit would involve a couple of searches rather than browsing by category/sub-category.

We can also break down our visits to focus on specific sections or functionality of the site – search, browsing, check-out – which makes it easier to interpret the results.
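As an illustration, here is a minimal sketch of one synthetic visit written in Python (the base URL, paths and credentials are placeholders, not a real site; a real harness would run many of these concurrently, following the arrival pattern described earlier):

```python
import random
import time

import requests

BASE_URL = "https://shop.example.com"   # placeholder retail site

def synthetic_visit(session: requests.Session):
    """One 'typical visit': login, browse a category, view an item, check out.
    Returns the response time of each step so they can be analyzed separately."""
    steps = [
        ("POST", "/login", {"user": "perf-user", "password": "secret"}),
        ("GET", "/category/shoes", None),
        ("GET", "/category/shoes/running", None),
        ("GET", "/item/12345", None),
        ("POST", "/checkout", {"item_id": 12345, "qty": 1}),
    ]
    timings = []
    for method, path, payload in steps:
        start = time.monotonic()
        session.request(method, BASE_URL + path, json=payload, timeout=10)
        timings.append((path, time.monotonic() - start))
        time.sleep(random.uniform(1, 5))   # "think time" between pages
    return timings
```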

Let’s Not Forget Background Load!

In order for our performance test to be realistic and representative of Production, we need to simulate, during our experiment, all the types of traffic taking place on the site – qualitatively and quantitatively – because each type of traffic consumes shared resources (cache, database, disk access, etc.) differently.

What I call “background load” is a simulation of the typical traffic on our site. The best way to simulate it is to record it and replay it – using a proxy server, log files, or other tools – see, e.g., Improving testing by using real traffic from production.

One exception: if we want to characterize the performance of a specific code path – and/or benchmark it against previous versions – then we should not run any background load.

How Long Should We Run the Test?

When thinking about the timing of a performance test, we first need to address the “ramp-up” or “warm-up” phase. When we launch our performance test, our system starts “cold”: typically it has an empty cache, a small number of threads running, a few database connections set up, etc. To obtain a realistic measurement of performance, we need to wait for the system to reach its steady state. It is not always straightforward to tell when the system reaches steady state, but we can get clues from monitoring critical system parameters: CPU, RAM, I/O on key servers (e.g. the database). Response time should also stabilize.

Secondly, because the simulated visitors access the system in a random fashion, we need to take our measurements over an extended period of time, so as to smooth out the randomness. An easy way to figure out the exact duration is to experiment: if the results we get after 10 minutes are the same – over 10 experiments – as those that we get over 1 hour – then 10 minutes is enough.

How Many Tests Do We Run?

Depending on the test, and the environment, we may have to run it multiple times. For example, if we are running on AWS, even if we are using reserved instances, performance may vary between runs, depending on network traffic, load from other tenants of the physical servers, etc.

Interpreting the Results

We can look at the results in at least 2 different ways:

1/ In absolute terms: does the response time meet our target?

2/ Relative to prior releases: is our response time in this release no worse than in prior releases? (A sketch of both checks follows.)
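The target, baseline and tolerance values below are assumptions, not recommendations:

```python
import statistics

TARGET_P95_S = 0.8                                     # assumed absolute response-time target
BASELINE = {"p50": 0.21, "p95": 0.65, "p99": 1.10}     # hypothetical numbers from the prior release
REGRESSION_TOLERANCE = 1.10                            # flag anything more than 10% slower

def summarize(response_times):
    """Compute the usual percentiles from the steady-state measurement window."""
    cuts = statistics.quantiles(response_times, n=100)  # 99 cut points
    return {"p50": statistics.median(response_times), "p95": cuts[94], "p99": cuts[98]}

def check(current):
    if current["p95"] > TARGET_P95_S:
        print(f"FAIL (absolute): p95 {current['p95']:.2f}s exceeds target {TARGET_P95_S}s")
    for metric, baseline in BASELINE.items():
        if current[metric] > baseline * REGRESSION_TOLERANCE:
            print(f"FAIL (relative): {metric} {current[metric]:.2f}s vs baseline {baseline:.2f}s")
```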

Performance Hygiene

Ideally, performance tests are automated and run regularly – at least after each Sprint. This allows us to catch performance regressions early.

Conversely, running performance tests after “code complete” is almost a waste of time, since this leaves us no time to remedy any serious performance issue, and thus puts us in a quandary: release a slow product on time vs release a good product late?

Common Mistake: Initial Conditions

Finally, we need to address a very common mistake, namely ignoring initial conditions.

To ground the discussion, let me give an example: we can all agree that the query “select * from TableA” will execute much faster if TableA has 1 row than if it has 100M rows.

The same applies to performance testing. Just as we let the system reach steady-state before we start measuring performance, we also need to ensure that all assets impacting performance are fully loaded.

To be more specific, let’s say today is March 15, and I am working on the release that will be in Production for Black Friday in November. In order to have a meaningful performance test, I need to make sure I load my E-commerce database not just with the same number of customers that we have today (March 15), but with the projected number of customers we’ll have by the end of November! Similarly, with the number of products in the catalog, documents in the search database, etc.

This is a complex – but absolutely critical – step. Otherwise, our tests will tell us about the past, not the future.

The final consideration is to ensure that the initial conditions – as well as the test itself – are 100% reproducible: e.g. by restoring from a previously archived database, by using the same pseudo-random sequence to trigger visits, the same logs to replay background traffic, etc. Otherwise, we are simply running a different test each time, and any benchmarking would be meaningless.
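For example, a test-setup script might look like the following sketch (it assumes a PostgreSQL-backed store; the snapshot path and database name are placeholders):

```python
import random
import subprocess

# Restore the archived, pre-sized database snapshot so every run starts from identical data.
subprocess.run(
    ["pg_restore", "--clean", "--dbname=shop_perf", "/backups/shop_black_friday_projection.dump"],
    check=True,
)

# Seed the pseudo-random generators so the arrival sequence and the replayed
# background traffic are identical from one run to the next.
random.seed(20150315)
```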


Performance testing is complex, and requires a lot of thought, careful planning and detailed work to produce results that are meaningful. Specifically, we need to:

  • Model one or more individual visit profiles constructed from traffic patterns on our site / in our service
  • Model the visit rate based on our site’s Analytics and extrapolate it to the projected level during the lifetime of the release
  • Generate pseudo-random sequences to model users’ arrival on the site
  • Generate a model of background load and/or a mixture of individual visits that together are a good approximation of actual traffic
  • Make sure to give the site/service time to warm-up (i.e. reach steady state) before starting measurements, and run the test long enough to smooth out the pseudo-random patterns. Also run multiple tests if environmental conditions cannot be fully controlled
  • Finally, make sure to properly initialize the whole system – in a reproducible fashion – so as to account for all the data already present in the system.
  • Finally finally, ensure that all tests conditions are reproducible, and tests are automated so that they can be run regularly – preferably upon the completion of each sprint. This ensures that performance bugs are caught early, rather than 2 weeks before the target release date.

How to Prioritize New Features vs Bug Fixes

The most lively debates that I regularly encounter leading an Engineering team revolve around the allocation of resources between bug fixing and the development of new features: “Why doesn’t Engineering fix all the bugs?” exclaims a customer support person – “Why don’t we allocate all Engineering resources to New_Shiny_Feature_X?” wonders the salesperson whose major deal depends on this feature.

These are both absolutely legitimate questions! … That does not mean the answers are easy.

The main challenge in satisfying these two rightful requests is that they compete for the same resources, and that different people within the company have strongly held, differing perspectives. The same person can even switch camps in a matter of days. It all depends on the last sales call: do we have a customer threatening not to renew until we fix “their bugs”, or do we have a big deal pending on the delivery of a new feature?

As a consequence, it is imperative to create a business and technical framework for decision making, in which every stakeholder can not only express their perspective but also be satisfied with the decision process, and thus with the decisions that come out of it.

Framework for Decision Making

What’s more important? Or more precisely, what’s more important to implement in this release cycle?

  • New features driven by product roadmap and corporate strategy
  • Customer-driven enhancement requests
  • Bug fixes requested by existing customers
  • Paying down technical debt: upgrade architecture, refactor ugly code, optimize operational infrastructure, etc

The process to reach a decision is basically the same as for any business decision: we weigh how much income each item will generate and how much investment it will require.

Implementing a new feature, fixing a bug, enhancing a released feature or paying down technical debt demand the same activities: define requirements, design, code, test, deploy. They also all draw from the same pool of product managers, developers, QA and DevOps engineers. As a consequence, it is relatively easy to define the “investment” side of the equation.

Estimating the income side is a bit more complex, because it comes in multiple flavors. However, the process is the same as prioritizing the backlog of new features: we need to articulate the business case:

  • Expected revenue stream (new features & enhancements)
  • Reduction in subscription churn (enhancements & bug fixes, as well as new features)
  • Cost reduction (technical debt / architecture) through increased future development velocity
  • Customer satisfaction (bugs & enhancements), which translates into better advocacy for the brand and churn reduction
  • Strategic objectives (market positioning, competitive move, commitment to win a major deal)

Each of these categories is important in its own right. Since they cannot all be translated into a common unit of measure (e.g. dollars), I recommend quantifying each of these elements relative to one another (e.g. using T-shirt sizes: S, M, L, XL, …) for each item on the list.

Practically, I create a matrix with one row per feature, bug, enhancement request, or technical-debt item, and the following columns (a minimal sketch of such a matrix follows the list):

  • Short Description
  • Link to longer description (Jira, Wiki, …)
  • Summary business case
  • Estimated engineering effort
  • Estimated calendar duration
  • Expected increase in revenue (if any)
  • Expected cost reduction (if any)
  • Customer satisfaction impact
  • Strategic value

While this is not perfect – ideally we’d want to assign a single score to each item – it allows us to (a) resolve the no-brainers (high benefit at low cost, or high cost and low benefit) and (b) frame the discussion for the remainder against the business context of the company:

  • Are we in a tight competitive race where we need to show momentum in our innovation?
  • Do we have one, or more, major deals dependent on a given set of features?
  • Are our customers grumbling about our product quality, or worse threatening to leave?
  • Is our scalability at risk because of legacy code?
  • Are we being hampered in our ability to deliver new features by too much legacy code?

While this will not eliminate passionate debates at Product Council, it will hopefully bound them, particularly if we can first agree on high-level priorities for the business.

Why Not Have a Dedicated Sustaining Engineering Team?

There are two primary reasons why a Sustaining Engineering team is a bad idea: first, it “does not answer the question” of prioritization, and secondly, it is a bad practice because it creates a class of “second-class citizen” engineers.

Say you want to have a Sustaining Engineering team. How large should it be? 5%, 10%, 20%, 50% of all of Engineering? Why? Should its size remain constant? Or are we allowed to shift resources in and out depending on business priorities? Answering these questions requires the same analysis and decision making as I propose above, but is burdened by the inflexibility of a split organization.

Regardless of whom you assign to Sustaining Engineering, these engineers will be considered second-class by the self-proclaimed hotshots who get to work on new features. Worse, it promotes laziness with respect to quality in the “new feature team”: they know that Sustaining will clean up whatever mess they leave. This attitude is pervasive, and over time can even lead to cherry-picking of work, where Sustaining ends up completing the “new feature” work. For example, the “new feature” team releases a new product on Chrome (so that they can meet “their date”), but Sustaining gets to make it work on Internet Explorer.

A Useful Best Practice

Any bug older than 12 months should be removed from the bug backlog: either marked as “Won’t Fix”, or moved to a secondary backlog list (which, I predict, will never be reviewed). The justification is simple: if a given bug has lived through a year’s worth of bug triages without rising to the top and being fixed, then it is almost certain that it will never be prioritized for resolution. Better to put it out of its misery. Furthermore, this keeps the bug backlog at a reasonable size and bug triage a manageable task. Finally, if for some reason the visibility of a bug rises anew, it can be returned to the active backlog.
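A minimal sketch of this rule, applied to an in-memory backlog (a real implementation would use the bug tracker’s own query language, e.g. a saved Jira filter):

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=365)

def prune_backlog(bugs, today):
    """Split the backlog into an active list and a 'Won't Fix / archive' list,
    following the 12-month rule described above."""
    active, archived = [], []
    for bug in bugs:
        (archived if today - bug["created"] > MAX_AGE else active).append(bug)
    return active, archived

backlog = [
    {"id": "BUG-101", "created": datetime(2013, 1, 10)},
    {"id": "BUG-245", "created": datetime(2014, 2, 3)},
]
active, archived = prune_backlog(backlog, today=datetime(2014, 3, 15))
print([b["id"] for b in archived])   # -> ['BUG-101']
```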


The adage “Software always has bugs” remains true, not because it is impossible to write perfect software (I argue that this IS possible), but rather because in a business context, quality is not an end in and of itself. Don’t get me wrong: high quality is critical, but fixing ALL the bugs is not a requirement for business success.

As a consequence, only one criterion matters: “What moves the business forward most effectively?”

Typically this means making customers happy. There are times when customers are happier if we fix bugs, at other times they prefer to see a new feature brought to market earlier. The answer depends on what drives their business. Do they prefer that we fix a bug that costs them an extra hour of work per day or that we launch a new feature that will allow them to grow their business by 10% in 6 months?

Notes from SF Data Mining Meetup: Recommendation Engines

Excellent talks on how each of the presenting companies approaches the design of its recommendation engine based on the specifics of its market and users.


Here are my notes on their respective technology stacks. Hadoop, Hive, Memcached, Java are used by all 3.

1. Trulia: Todd Holloway on Trulia Suggest.

  • Hadoop
  • Hive
  • R on each Hadoop Server
  • Memcached
  • Java

2. Rich Relevance: John Jensen and Mike Sherman

  • Hadoop
  • Hive
  • Pig
  • Crunch

Starting to deploy

  • Kafka
  • Storm

3. Pandora: Eric Bieschke

  • Python, Hadoop, Hive for offline processing
  • Memcached, Redis for near-line & online
  • Java & PostgreSQL for online

Memcached: used as a “key-value store in the sky” – as long as you don’t care about losing data

Redis: “persistent Memcached”

Scalable Software Architecture for a Startup

Say we are the founders of a startup and we just got a big fat check for our A-round funding. The VCs love our idea, and we all know that our app will attract millions of users in no time. This means that from day one we architect for millions of page-views per day…

But wait … do we really need to deploy Hadoop now? Do we need to design for geographical redundancy now? Or should we just build something that’s going to take us through the next 3 months, so that we can focus our energy on customer development and fine-tuning our product features? …

This is a dilemma that most startups face.

Architecting for Scale

The main argument for architecting for scale from the get-go is akin to: “do it right the first time”: we know that lots of users will be using our app, so we want to be ready when they come, and we certainly don’t want the site going down just as our product catches fire.

In addition, for those of us who have been through the pain of a complete rewrite, a rewrite is something we want to avoid at all costs: it is a complex task that is fun under the right circumstances, but very painful under time pressure, e.g. when the current version of the product is breaking under load and we risk turning away customers, potentially forever.

On a more modest level, working on big complex problems keeps the engineering team motivated, and working on bleeding or leading edge technology makes it easier to attract talent.

Keeping It Simple

On the other hand, keeping the technology as simple as possible allows the engineering team to be responsive to the product team during the customer development phase. If you believe, as I do, one of Steve Blank’s principles of customer development – “No Business Plan Survives First Contact with Customers” – then you need to prepare for its corollary: “no initial product roadmap survives first contact with customers”. Said differently, attempting to optimize the product for scale before the company has clearly validated its business assumptions, and its product roadmap, is premature.

On the contrary, the most important qualities that are needed from the Engineering team in the early stages of the company are velocity and adaptability. Velocity, in order to reduce time-to-market, and adaptability, so that the team can rapidly adapt to feedback from “outside the building”.

Spending time designing and implementing a scalable architecture is time not spent responding to customer needs. Similarly, having built a complex system makes it more difficult to adapt to changes.

Worst of all, the investment in early optimization may be all for naught: as the product evolves with customer feedback, so do the scalability constraints.

Case Study: Cloudtalk

I lived through such an example at Cloudtalk. Cloudtalk is designed as a social communication platform with an emphasis on voice. The first two products, “Cloudtalk” and “Let’s Talk”, are mobile apps that implement various flavors of group messaging with voice (as well as text and other media). Predicting rapid success, Cloudtalk was designed around the highly scalable NoSQL database Cassandra.

I came on board to launch “Just Sayin”, another mobile app that runs on the same backend (a very astute design). Just Sayin is targeted at celebrities and allows them to cross-post voice messages to Twitter and Facebook. One of my initial tasks was to scale the app, and it was suggested that we move it to Amazon Web Services so that we could scale rapidly as more celebrities (such as Ricky Gervais) adopted our product. However, a quick analysis revealed that, unlike the first two products (Let’s Talk and Cloudtalk), Just Sayin’s impact on the database was relatively light, because communications were 1-to-many (e.g. Lady Gaga to her 10M fans). Rather, in order to scale, we first needed a Content Delivery Network (CDN) so that we could feed the messages from their celebrities to millions of fans with low response time.

Furthermore, while Cassandra is a great product, it was somewhat immature at the time (stability, management tools) and consequently slowed down our development. It also took us a long time to train new engineers.

While Cassandra would have been a good choice in the long run, we would have been better served in the formative stages of the company by more established technology like MySQL. Our velocity in developing new features, and our ability to respond to changes in product strategy, would have been significantly higher.

Architecting for Scale is a Process, not an Event

A startup needs to earn the right to design for scale, by first proving that it has found a legitimate market. During this first phase adaptability and velocity are its most important attributes.

This being said, we also need to anticipate that we will need to scale the system at some point. Here is how I like to approach the problem:

  • First of all, scaling is an on-going process. Even if traffic increases dramatically over a short period of time, not all parts of the system need to be scaled at the same time. Yet, as usage increases, it is likely that, at any point in time, some part of the system will need to be scaled.
  • In order to avoid complete rewrites of the system, we need to break it into independent components. This allows us to redesign each component independently, and have different teams work on different problems concurrently. As a consequence, good modularization of the system is much more important early on, than designing for scale
  • Every release cycle needs to budget time and resources for redesign – including both modularization and scalability. This is just like maintenance on the Golden Gate bridge: the painters are always working; when they finish at one end, they start all over at the other end.
  • We need to treat our software architecture the same way, and budget maintenance work every release cycle: dollars, time, people. CEOs have to be trained to not only think about the “shiny features” – those that are customer-facing – but also about the “continuous improvements” of the architecture that has to be factored in every release cycle.
  • We also need to instrument the code to tell us where it is under strain (a minimal sketch follows this list). Unlike the Golden Gate bridge, we can’t always see where it’s breaking, or even rationalize it. Scaling sometimes works in mysterious ways that are not always obvious to predict.
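Here is a minimal sketch of what such instrumentation can look like: a simple timing decorator. The threshold values are placeholders, and a real system would feed these measurements into its monitoring pipeline rather than plain logs:

```python
import functools
import logging
import time

logger = logging.getLogger("instrumentation")

def timed(threshold_s=0.5):
    """Record how long a call takes and warn when it exceeds a threshold,
    so we can see which parts of the system are under strain."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed = time.monotonic() - start
                logger.info("%s took %.3fs", fn.__name__, elapsed)
                if elapsed > threshold_s:
                    logger.warning("%s exceeded the %.1fs threshold", fn.__name__, threshold_s)
        return wrapper
    return decorator

@timed(threshold_s=0.2)
def load_home_page(user_id):
    ...   # fetch data, render templates, etc.
```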


In summary, designing for scale is a high-class problem, on which we only get to work once we have demonstrated true demand for our product. During this first phase, velocity and adaptability are critical, and are better served with well-understood technologies, and a well modularized design. Once our product reaches an adoption phase, then designing for scale is a continuous process that hopefully can be focused on individual modules in turn – guided by proper instrumentation of the code


QA does not stop in QA

Quality Assurance does not stop after the software receives the “thumbs up” from the QA team. QA must continue while the product is Live! … because QA is not perfect, and real users only exist on a Production system. We need to be humble and accept that our design, development and quality processes will not catch all the issues. Consequently, we must equip ourselves with tools that will allow us to catch these problems in Production as early as possible … rather than “wait for the phone to ring”

When the product exits QA, it simply means that we’ve run out of ideas on how to make the system fail. Unfortunately, this does not imply that the system, once in Production, will not fail. If we are successful and get a high volume of traffic, the simple law of large numbers guarantees that our users will find never-thought-of ways to – unintentionally – make the system fail. These are part of the “unknown unknowns”, as Mr. Donald Rumsfeld would say. Deploying the product on the production servers, and handing off (abdicating?) the responsibility of keeping it up to the Ops team, reflects wishful thinking, or naïveté, or both.

Why QA must continue in Production

There are a few categories of issues that one needs to anticipate in Production:

  • Functional defects: in essence, bugs that neither developers nor QA caught. While this is the obvious category that comes to mind, it is far from the only source of issues
  • User experience (UX) defects: the product works “as spec’d”, but users either can’t figure out how to make it work, or don’t like it. A typical example is a high abandonment rate in a purchase flow (or any kind of workflow), a feature that’s never used, or a button that’s never clicked.
    This is not reserved to new products: by improving the layout of a given page, we may have broken another feature on that same page
  • Performance issues: while we may have run performance and load tests in our QA environments, the real world always offers surprises. Furthermore, if we are lucky enough to have the kind of traffic that Google or Facebook have, there is no other way but to test and fine-tune performance in production.
    Running tests on non-production systems requires us not only to simulate the load on the system, but also to simulate the “weight” of existing data (e.g. in the database, the file system), as well as longevity, to ensure that there is no resource leak (memory, threads, etc.)
  • Operational issues: while all cloud applications are typically clustered for high-availability, there are other sources of failure than equipment failure:
  • External resources, such as partners or data feeds, can fail, have bugs of their own, or simply not keep up their response time. Sometimes, the partner updates its API without notification (see the sketch after this list)
  • User-provided data can be malformed, or in an unexpected format, or a new data format can be introduced after the launch of the product
  • System resources can be consumed at an unexpected rate. Databases are notorious for having non-linear response times as a function of load: as long as the load is under a given threshold, response time is good, but once the load exceeds this threshold, response time can deteriorate very rapidly.
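As an illustration of defending against the external-resource category, here is a minimal Python sketch of a partner API call (the endpoint is a placeholder): a bounded timeout, explicit handling of unexpected status codes, and a log line with enough context to troubleshoot later.

```python
import logging

import requests

logger = logging.getLogger("partner_api")

def fetch_partner_data(order_id):
    """Call the partner API defensively and report failures instead of failing silently."""
    url = f"https://api.partner.example.com/orders/{order_id}"   # placeholder endpoint
    try:
        response = requests.get(url, timeout=5)
    except requests.exceptions.RequestException as exc:
        logger.error("partner API unreachable for order %s: %s", order_id, exc)
        return None
    if response.status_code != 200:
        # Error codes we have never seen do happen in the wild; record them with context.
        logger.warning("partner API returned %s for order %s: %s",
                       response.status_code, order_id, response.text[:200])
        return None
    return response.json()
```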


A couple of examples:

  • At my previous company, weeks after the product had been launched, we started receiving occasional complaints that some user-created videos were not showing up in their timeline. After (reluctantly) poking around in our log files, we found out that about 10% of the videos uploaded to our site over the previous 2 weeks (but not earlier) had not been processed properly. Our transcoder simply failed. Worse, it failed silently. The root cause was a minor modification to the video format introduced by Apple after our product was released. Since this failure was occurring for a small fraction of our users, and we had no “operational instrumentation” in our code, it took us a long time to even become aware of it.
  • Recently, we launched a product that exchanges data with a partner. Their API is well documented, and we tested our product in their sandbox environment, as well as their production environment. However, after launch, we had reports of occasional failures. It turned out that users on our partner’s site were modifying the data in ways that we did not expect, causing the API to return error codes that we had never seen. Our code duly logged this problem in our log files each time it occurred … among the thousands of other log events generated every minute


Performing QA on Production Systems

As I mentioned, the Googles and Facebooks of the world do a lot (if not most) of their QA on Production systems. Because they run hundreds of thousands of servers, they can use a small subset to run tests with live user data. This is clearly a fantastic option.

Similarly, “A/B comparison” techniques are typically used in Marketing to compare two different user experiences where the outcome (e.g. a purchase) can be measured. The same technique can be applied to testing, e.g. to validate that a fix for an intermittent, hard-to-reproduce bug actually works.


More generally, Production code needs to be instrumented:

  • To detect failures, or QoS (Quality of Service) degradations, with internal causes (e.g. database is slowing down)
  • To detect failures, or QoS degradations, with external causes (e.g. partner API times out a lot)
  • To monitor resource utilization for each service or application – at a finer grain than that provided by Operations monitoring tools, which typically work at the server level

The point is that if a user can’t buy a book on our website because our servers crash under load, this is a bug. While the crash itself is not due to incorrectly written code, the absence of code warning us that the system was running out of steam is still a bug.


In order to monitor quality in Production, we need to:

  • Clean up the code that writes to log files: eliminate all logging used for code testing, or statements such as “the code should never reach here”. Instead, write messages that will be meaningful to the poor soul who, a few weeks later, will be poring over megabytes of log files on a Sunday night trying to figure out why the system crashed
  • Ensure that log messages have consistent severity levels (e.g. as recommended by RFC 5424 – Wikipedia has a nice table), so that meaningful alerts can be triggered
  • Use a log aggregation system, like GrayLog2 (open source), so that log files from multiple nodes in the same cluster, as well as nodes from different services, can (a) be searched from a console and (b) be viewed, time-aligned, on a single page (critical for troubleshooting). GrayLog2 can handle hundreds of millions of log events and terabytes of data.
  • MEASURE: establish a baseline for response time, resource consumption and errors – and trigger alerts when the metrics deviate from the baseline beyond a predetermined threshold (a minimal sketch follows this list)
  • Track that core functions – from a user perspective – complete, and log when, and ideally, why, they fail along with key parameters. E.g.: are users able to upload files to our system, are failures related to file size, time of day, location of user, etc?
  • Log UX and operationally meaningful events to track how users actually use the system, what features are most used and track them over time. These metrics are critical for the Product Management team
  • Monitor resource utilization and correlate it with usage patterns. Quantify key usage parameters in order to scale the right resources in advance of demand. For example, as traffic grows, the media servers and the database servers may grow at different rates.
  • Integrate alarms from application errors into the Ops monitoring tools: e.g. too many “can’t connect” errors should trigger an Ops alert that our partner is down; slow response time on a single server in a cluster may indicate that its disk is failing
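A minimal sketch of the MEASURE bullet above (the baseline and alert factor are assumed values; a real deployment would compute the baseline from historical Production data and route the alert into the Ops monitoring tools):

```python
import logging
import statistics

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",   # consistent, parseable severity levels
)
logger = logging.getLogger("checkout")

BASELINE_P95_S = 0.9    # assumed baseline, established from previous weeks of Production data
ALERT_FACTOR = 1.5      # alert when we drift 50% above the baseline

def check_response_times(recent_samples):
    """Compare the latest measurement window against the baseline and log at alert level on drift."""
    p95 = statistics.quantiles(recent_samples, n=100)[94]
    if p95 > BASELINE_P95_S * ALERT_FACTOR:
        logger.critical("checkout p95=%.2fs exceeds baseline %.2fs", p95, BASELINE_P95_S)
    else:
        logger.info("checkout p95=%.2fs within baseline", p95)
```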


Quality is not a one-time event; it is an everyday activity, because users change their behaviors, partners change their APIs, systems fill up and slow down. What used to work yesterday may not work today, or may no longer be good enough for our customers. As a consequence, the concept of “test-driven” development must be extended to Production systems, and our code must be instrumented to provide metrics that confirm that everything works as desired, and alerts when it doesn’t. But that’s not sufficient: developers and QA engineers must also take the time to look at the data, not just when a fire drill has been called, but on a regular basis, to understand how the system is being used and how resources are consumed as the system scales, and apply this knowledge to subsequent releases.

Migrating a Self-hosted Architecture to the Cloud

While it may be possible to migrate a self-hosted architecture to the cloud with servers in an identical configuration, doing so will almost certainly lead to a sub-optimal architecture in terms of performance, and to higher costs – in some cases prohibitively so.

The common objectives for moving to the cloud are:

  • Ability to scale transparently as the business grows
  • Reduce costs
  • Benefit from a world-class IT infrastructure without having to hire the talent


We’ll focus on the first two objectives as the third one is achieved – by nature – the moment you flip the switch to the cloud.

Memory Drives Pricing in the Cloud – not CPU

In the cloud, whether with Amazon EC2 or other vendors, the primary dimension driving pricing is the amount of memory (RAM) available in the server. In addition, the CPU allocated is roughly proportional to the amount of RAM.

For example, as of this writing, per the Amazon EC2 pricing and the Amazon EC2 Instance Types definitions:

  • A Small instance has (only) 1.7 GB of RAM and 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), and costs $0.08 per hour On-Demand
  • An Extra Large instance is 8 times bigger than a Small instance and costs 8 times as much: 15 GB of RAM and 8 EC2 Compute Units, at $0.64 per hour On-Demand
  • In order to get 32 GB of RAM, one needs to move to the High-Memory Double Extra Large Instance, aka m2.2xlarge: $0.90 per hour On-Demand

Note that the prices quoted here are for the US East (N. Virginia) zone. Prices for US West (Northern California) are about 12% higher, based on a few data points I correlated.

Database Servers

Database servers have unique requirements:

  • They require fast I/O to disk. While Amazon recommends using networked storage, this is typically not practical from a performance perspective.
  • As a result, database servers also require large local disks to hold the data
  • Most databases require a fair amount of memory, at least 16 GB. We use Cassandra, which recommends 32 GB per server.
  • They hate noisy neighbors (see my previous post). While virtualization technology does a fairly good job of partitioning CPU and RAM, it does a much poorer job of sharing I/O bandwidth. Having another virtual machine running on your database server can kill its performance: even if the neighbor does not do much, it can ruin I/O efficiency. All the tricks that databases use to optimize I/O performance assume that the database is in control of all I/O buses.

As a consequence, one should first of all use a reserved instance – simply because the cost of getting data in and out of the local disks makes it impossible to set up / tear down database servers “at will”.

Secondly, one should buy a large enough instance (e.g. m2.4xlarge) so that we are the only tenant on the server. This will cost $7,203 per year – based on Heavy Utilization Reserved Instances pricing, and get us 68.4 GB memory, 4 cores (8 virtual – with Intel Hyper-Threading) and 1.69 TB of local storage.


As Adrian Cockcroft from Netflix illustrates in his detailed post: Benchmarking High Performance I/O with SSD for Cassandra on AWS, moving to SSD instances for I/O and compute intensive systems can bring significant cost reductions. In his example, he compares a traditional system with 36 x m2.xlarge + 48 x m2.4xlarge instances at a cost of $772,806 (Total 3 Year Heavy Use Cost) – with a 15 x hi1.4xlarge system at a cost of $354,405 – a 54% savings.

As the article illustrates, selecting one versus the other requires careful understanding of the computational profile of the application, and some changes in the application’s architecture

Do I want to use Proprietary Amazon Solutions?

Following the logic that motivates us to move to the cloud forces us to consider using Amazon’s proprietary solutions: reduced need for sysadmin talent, an out-of-the-box scalable high-availability solution, etc.

  • Should I replace MySQL with RDS? Or Cassandra / HBase with DynamoDB?
  • Should I replace my message queue (e.g. ActiveMQ) with SQS?
  • … and similarly for many other AWS products


These are excellent products, battle tested by Amazon. However, there are 2 very important considerations to examine:

  • First, these products are obviously proprietary – moving to another cloud provider, like Rackspace or Joyent, would require an extensive code rewrite. This may turn out to be impractical.
  • Secondly, cost can be a (bad) surprise once the application is deployed live. For both RDS and SQS, pricing is driven by data bandwidth AND the number of operations performed against the service – which requires careful analysis to estimate ahead of time. For example, polling every 10 seconds to check whether new data is present in SQS generates about 250K requests per month (assuming each check requires only one request). This is fine if the polling is performed by a few servers, but would break the bank if it is performed by 100,000 end-user clients: at $0.000001 per request, it adds up to roughly $25,000 per month (see the back-of-the-envelope check below).
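A quick back-of-the-envelope check of that polling example (the per-request price is the one quoted above):

```python
# Every 10 seconds -> 6 polls/minute, for a 30-day month.
polls_per_client_per_month = 6 * 60 * 24 * 30          # = 259,200 requests
price_per_request = 0.000001                           # $ per SQS request, as quoted above
clients = 100_000                                      # end-user clients polling directly

monthly_cost = polls_per_client_per_month * price_per_request * clients
print(f"{polls_per_client_per_month:,} requests/client/month -> ${monthly_cost:,.0f}/month")
# -> about $26,000/month for 100,000 clients (the ~$25,000 figure above), vs pennies for a few servers
```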

Algorithm Tuning and Server Selection

More generally, Amazon offers seven families of servers: Standard, Micro, High-Memory, High-CPU, Cluster Compute, Cluster GPU, and High I/O (SSD). Porting an existing application will thus require an iterative process evaluating the following questions:

  • How do I best match each of my system’s components with one of Amazon’s instance types?
  • Can I fine-tune, or even re-write, my algorithms to maximize RAM & CPU utilization? In particular, would I make the same memory vs computation trade-offs? Do I need this hash-table, or can I re-compute the query?
  • How does my architecture evolve as I scale out? For example, do I need to replicate shared resources – like caches – or will sharding (for example) avoid this duplication of data – which directly impacts my cost, since pricing is memory-driven? An algorithm may work best with an approach that favors memory (and minimizes CPU) when running on a single server, yet be more cost-effective when optimized to minimize memory once scaled out over many servers.
  • How do new technologies like SSD impact my architecture? As the Netflix article illustrates, the cost impact can be radical, but it required substantial architecture redesign, not just a simple server replacement


In conclusion, moving from a hosted environment (where each server can be configured at will) to the cloud, where servers come in pre-determined configurations, requires not only an architecture review, but also a sophisticated Excel spreadsheet to compare the costs of the various candidate architectures. This upfront financial modeling is absolutely necessary in order to avoid unpleasant surprises as the business scales up.
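As a poor man’s version of that spreadsheet, the sketch below compares the yearly cost of two candidate architectures. The only real number is the $7,203 yearly price for an m2.4xlarge quoted earlier; the other prices and server counts are hypothetical placeholders to be replaced with the vendor’s actual reserved-instance pricing and one’s own capacity planning.

# Yearly price per instance type ($/year). Only m2.4xlarge is quoted above;
# the other figures are placeholders.
YEARLY_PRICE = {"m2.xlarge": 1800.0, "m2.4xlarge": 7203.0, "hi1.4xlarge": 10500.0}

# Two hypothetical candidate architectures: instance type -> server count.
ARCHITECTURES = {
    "RAM-heavy (traditional)": {"m2.xlarge": 36, "m2.4xlarge": 48},
    "SSD-based":               {"hi1.4xlarge": 15},
}

def yearly_cost(servers):
    return sum(count * YEARLY_PRICE[itype] for itype, count in servers.items())

for name, servers in ARCHITECTURES.items():
    print("%-25s $%12s / year" % (name, format(yearly_cost(servers), ",.0f")))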

Want to Predict your Cost in the Cloud? Roll Up Your Sleeves!


The selection of a cloud service provider is a critical decision for any software service provider, and cost is, naturally, a key driver in this selection. However, predicting the cost of running servers in the cloud is a project in and of itself, because the only way to build a reliable cost model is to go ahead and deploy our systems with the candidate service providers.


Why is it not possible to forecast costs with pen and paper?

The main reason that pricing is so hard to forecast is that our system architecture in the cloud will likely be different from the one currently running in our own datacenter: the server configurations are different, the networking is different, and, most likely, we will want to take advantage of the new features that come “for free” with a deployment in the cloud: higher availability, geographical redundancy, larger scale, etc. We’ll cover this in detail in an upcoming post.


Another reason why it is hard to predict costs is that we don’t really know what we are getting:

When one considers the primary attributes of a server – RAM, CPU, storage, I/O (network bandwidth) – only RAM and storage capacity are guaranteed by cloud vendors. Vendors provide varying degrees of specificity about CPU and other key characteristics. Amazon defines EC2 Compute Units: “One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor”. Rackspace’s price sheet categorizes servers by available RAM and disk space (the more RAM, the more disk space); their FAQ mentions the number of virtual cores each server receives, based on the amount of RAM allocated, but I could not find their definition of a virtual core. GoGrid and Joyent provide similarly limited information.


As a side note, one needs to be aware that vendors typically refer to “virtual cores” – as opposed to real (physical) cores. A virtual core corresponds to one of the two hyper-threads that have run on modern Intel processors since 2002. Thus, a server with a quad-core Intel Xeon processor presents 8 virtual cores. You can read this 2009 post, plus its comments thread, for more specifics; while the data is dated, the observations are still relevant.


So, there is a lot that we don’t know about the servers on which our system will run: CPU clock, sizes of the L1 and L2 caches, I/O bus speed, disk spindle rotation speed, network card bandwidth, etc.

Furthermore, performance will vary across servers (since cloud vendors operate a diverse fleet of servers of different ages); thus, each time a new image is deployed, it will land on a random server with the same nominal specs (RAM, storage) but unknown other physical characteristics (CPU clock, I/O bandwidth, etc.).


Another well-documented problem is that of noisy neighbors. While hypervisors do a fairly good job of controlling the allocation of CPU and memory, they are not as effective at controlling the multitude of other factors that affect performance; I/O, in particular, is very sensitive to contention. While VMware affirms that vSphere solves this problem, most (all?) cloud vendors use open-source hypervisors.

In any event, this problem is systemic and cannot be solved by the hypervisor alone. For example, we did a lot of research on the best configuration for our Cassandra servers (our database for big data). One of the main performance optimizations driving Cassandra’s design is to favor “append” operations over updates, thus minimizing random movements of the disk’s read/write head and maximizing disk I/O throughput. Unfortunately, all this clever optimization goes out the window if we share the server – and thus the disks – with a noisy neighbor performing random read/write operations. I had the chance to discuss this a couple of months ago with members of the Cassandra team at Netflix (one of the largest users of Cassandra, deployed almost 100% on Amazon): they solve the issue by only using m2.4xlarge instances on AWS, which (today) ensures that they are the only tenant on the physical server – and therefore have no noisy neighbors.


Adding all this together makes it pretty clear that vendor comparison on paper is practically fruitless.

Let’s Try It Out

The only practical way to create a realistic budget forecast is to actually deploy systems on the selected cloud vendor(s) and “play” with them. Here are some areas to investigate and characterize, beyond simply validating functionality:

  • Optimal server configuration for each server role (web, database, search, middle tier, cache, etc). We need to make sure that each server role is adequately served by one of the configurations offered by the vendor. For example, very few offer servers with more than 64 GB of RAM
  • Performance at scale (since we only pay for the servers we rent, we can run full-scale performance tests for a few hours or days at relatively low cost – e.g. a few hundred dollars) – Netflix tested Cassandra performance, “Over a million writes per second”, on AWS for less than $600, using clusters as large as 288 nodes (a rough cost estimate is sketched after this list)
  • End-to-end latency (measured from an end-user perspective) – since latency will be impacted by the physical distribution of the servers
  • Pricing model
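Regarding the second bullet (performance at scale), a full-scale test is cheaper than it sounds because the cluster only has to exist for the duration of the test. A quick sanity check in Python, using a hypothetical $1.00/hour/node on-demand rate:

# Rough cost of a short full-scale performance test (on-demand pricing).
# The hourly rate is a placeholder; use the vendor's current price sheet.
def test_cost(nodes, hours, hourly_rate):
    return nodes * hours * hourly_rate

# e.g. a 288-node cluster rented for 2 hours at a hypothetical $1.00/hour/node
print("$%.2f" % test_cost(288, 2, 1.00))    # $576.00 -- "a few hundred dollars"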


For these tests to be meaningful, one needs to ensure that the deployments are realistic: for example, spanning availability zones and regions if we plan on leveraging these capabilities – as they impact not only performance (due to increased network latency) but also pricing (data-transfer charges).


In addition, each test must be run several (10 – 20) times – with fresh deployments – at different times of day – in order to have a representative sample of servers and neighbors.
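A minimal sketch of that discipline in Python: run the benchmark on a fresh deployment each time and look at the spread of the results, not just the average. deploy_fresh_instance, run_benchmark and terminate are hypothetical stand-ins for vendor-specific tooling; here they are stubbed out so the sketch runs on its own.

import random
import statistics

# Hypothetical stand-ins for vendor-specific tooling, stubbed out for illustration.
def deploy_fresh_instance():
    return object()                    # in reality: launch a new VM (lands on a random physical host)

def run_benchmark(instance):
    return random.gauss(100.0, 15.0)   # in reality: measure requests/s, I/O MB/s, latency, ...

def terminate(instance):
    pass                               # in reality: release the VM so billing stops

def sample_performance(runs=15):
    results = []
    for _ in range(runs):
        instance = deploy_fresh_instance()
        results.append(run_benchmark(instance))
        terminate(instance)
    print("median:", statistics.median(results))
    print("stdev :", statistics.stdev(results))   # a large spread suggests noisy neighbors
    print("worst :", min(results))                # plan capacity on the worst case

sample_performance()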


Just as important as the technical performance validation is validating the pricing model, as vendors charge for a variety of services in addition to the lease of the servers: most notably bandwidth for data transfers (e.g. across regions), but also optional services (e.g. AWS monitoring or auto-scaling), as well as per-operation fees (e.g. Elastic Block Store). These per-operation fees can add up to very large amounts if one is not careful. For example, see the Amazon SimpleDB price calculator – we have to run SimpleDB under a real load in order to figure out what numbers to plug in. Overlooking this step can be costly.


Once the technical tests have been completed, and the system configuration validated, I recommend running at least a full billing cycle of simulated operations, in order to obtain an actual bill from the vendor, from which we can build our pricing model.

Deploying to the Cloud? Hang on to your Trousers!

My team and I have spent the past months investigating a deployment to the Cloud with vendors such as Amazon, Rackspace, GoGrid … to name a few of the providers of Infrastructure as a Service (IaaS).

A few conclusions have surfaced:

  • One needs to be clear about one’s motivations for migrating to the Cloud – different motivations will lead to different outcomes for a given product
  • It is almost impossible to predict the cost of a cloud-hosted system – without deploying a test system with the selected vendor. As a corollary, precise comparison shopping is almost impossible.
  • It is almost impossible to design, let alone deploy, your system architecture – without prior hands-on experimentation with your selected vendor. Also, the optimal architecture once deployed in the Cloud is likely to be radically different from one deployed on your own servers.
  • Some Cloud vendors are moving aggressively up the value chain by offering innovative software technologies on top of their infrastructure, thus becoming PaaS (Platform as a Service) vendors. For example, as we commented in a previous post, “Is Amazon After Oracle and Microsoft?”, Amazon is deploying an array of software technologies – combined with services – that are tailored specifically for the Cloud and are technically very advanced

We will expand on each of these points in upcoming posts, starting with the first one today.

The main arguments advanced in favor of a cloud infrastructure are:

  • Offload the system management responsibilities to the Cloud services provider:
    This is more than an economic trade-off: managing systems for high-volume Internet applications is a complex task requiring a broad set of technical skills – skills that are in permanent evolution. Acquiring them typically requires multiple engineers with varied backgrounds: computer hardware, operating systems, storage, networking, scripting, security, etc. These system administrators have been in high demand for the past couple of years, command high compensation, and usually want to work for companies that offer challenging work … namely, those with a very large number of systems. As a result, some companies are simply unable to hire the necessary system administration talent in-house, and are forced to move to the Cloud for this reason alone.
  • Leverage best practices established by Cloud vendors.
    Cloud service providers have optimized every aspect of running a datacenter. For example, Facebook released the Open Compute Project in 2011 for server and data center technology. Rackspace launched the OpenStack initiative in late 2010 to standardize and share software for Compute (systems management), Storage, Media and Security, as well as Identity and Dashboard. Even managing systems at a hosting provider requires constant tuning of system management tools – whereas a Cloud service provider will take on this burden.
  • Benefit from the economies of scale that the Cloud vendors have created for themselves
    Building data centers, finding cheap sources of power, buying and racking computers, creating high-bandwidth links to the Internet, etc. are all activities whose cost drops with volume. However, to me, the impact of price is much smaller than that of pure skills. The aforementioned tasks are becoming more and more complex, to the point where only the largest companies are capable of investing enough to keep up with the state-of-the-art.
    In particular, Cloud vendors offer high-availability and recoverability “for free” – namely: free from a technical perspective, but not from a financial one.
  • Ability to rapidly scale systems up or down according to load
    This is one of the main theoretical benefits of the cloud. However, it requires a few architectural components to be in place:
    (a) The software architecture has to be truly scalable and free of bottlenecks. For example, traditional N-tier architectures were advertised as scalable because web servers could be added easily. Unfortunately, the database rapidly becomes the throttling component as the load rises, and scaling up traditional database sub-systems, while maintaining high availability, is both difficult and expensive.
    (b) Tools and algorithms are required to detect variations in load, and to provision or decommission the appropriate servers (a simple threshold-based sketch follows this list). This requires a good understanding of how each component contributes to the performance of the whole system; the complexity increases when a component’s performance does not scale linearly with load.
    (c) Data repositories are slow and expensive to migrate. For example, doubling the size of a Cassandra (noSQL database) cluster is time consuming, uses a lot of bandwidth (for which the vendor may charge) and creates load on the nodes in the cluster.
  • Ability to create/delete complete system instances (most useful to development and testing)
    The Cloud definitely meets this promise for the front-end and business logic layers, but if an instance requires a large amount of data to be populated, you must either pay the time & cost at each deployment or keep the data tier up at all times.  This being said, deploying complete instances in the Cloud is still a lot cheaper and faster than doing it in one’s data center, assuming it can be done at all.
  • The Cloud is cheaper:
    This is a simple proposition, with a complex answer. As we’ll examine in the next blog: figuring out pricing in the cloud is a lot more complex than adding the cost of servers.
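To illustrate point (b) above, here is a toy threshold-based scaling rule in Python that derives the desired number of web servers from the observed load. The capacity and utilization figures are hypothetical; a real implementation also needs cool-down periods, and must respect the data-tier constraints of point (c).

import math

REQUESTS_PER_SERVER = 500     # measured capacity of one web server (requests/second)
TARGET_UTILIZATION = 0.6      # leave headroom for spikes
MIN_SERVERS, MAX_SERVERS = 2, 50

def desired_server_count(observed_load_rps):
    """How many web servers to run for the observed request rate."""
    needed = math.ceil(observed_load_rps / (REQUESTS_PER_SERVER * TARGET_UTILIZATION))
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

print(desired_server_count(1200))    # -> 4
print(desired_server_count(12000))   # -> 40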

Appreciating the business and technical drivers that motivate a migration to the Cloud will drive how we approach the next steps in the process: system architecture design, vendor selection, and pricing analysis. As always, different goals will lead to different outcomes.

