Time Tested Engineering Leadership Principles

I put together the first three of these four leadership principles during my first VP of Engineering gig, twenty years ago. Thirteen companies later, and having shared them with hundreds of engineers, I feel it is time to share the secret. :)

These leadership principles have been honed (a) for Engineers and (b) in the context of startups, typically with fewer than 150 employees. No claim is being made outside of these parameters.

1.   I commit to give you more responsibilities than you can handle … and help you succeed

The vast majority of Engineers are highly motivated (see my previous blog, “(Boosting) Morale in Engineering”). They are motivated by their career, naturally, yet they are primarily driven by a need to accomplish and an intense desire to learn.

Another way of articulating this commitment is: “I am going to challenge you, and let you work as hard as you want, and exercise as many of your skills as possible”. Engineers hate being bored. On the contrary, they work extra hard when challenged. So my job is to continuously provide new challenges to each engineer in my team, and remove any impediments to their desire to fulfill these challenges.

2.   I commit to give you clarity, both strategic & tactical

I work hard to ensure that everyone knows where we, as a company and as an Engineering team, are going, what our objectives are (strategic), and how we plan to get there (tactical).

In practice, I make sure, during our periodic 1on1s, that each engineer understands how his/her own project and role align with the company mission and with Engineering’s product roadmap.

Included in this commitment is a promise to each member of the Engineering team that on any given day, his/her #1 priority is clear. As a logical consequence, this implies that each engineer only has one #1 priority (I have seen a lot of companies where this logic is violated). Their manager, or I as a last resort, will handle situations where, for example, 3 VPs are breathing down an engineer’s neck, each with their own “top priority”.

Having everyone in the team understand and share the same strategic context empowers developers to make correct micro-decisions every day. As a side benefit, this frees me and their managers to work on bigger problems.

 

Taking a step back: if I have correctly communicated commitments 1 and 2, then everyone in the team is working at the maximum of their ability – and all are working in the same direction. This is a good foundation for solid productivity.

Having made two commitments to everyone in the team, I ask for two in return.

3.   In return, I demand teamwork & 3-D communications

I put teamwork and communications in the same sentence because one is meaningless without the other. Teamwork can’t exist without meaningful communications, and if we communicate but don’t work together, we don’t go very far.

No interview question will ever suss out whether a candidate is a team player or not. Instead, I explicitly declare that they should not join my team if they are not a team player.

Teamwork is important because product development is a team effort. Every engineer interacts with product managers, UX designers, front-end engineers, middle-tier, backend, data, QA, tech support, etc. Poor interactions with other team members result in poor individual efficiency.

Teamwork means that “together, we succeed”. Teamwork is not merely about helping out a teammate who needs help. More importantly, being a team player means asking for help when we need it, so as not to delay the whole team.

3-D communications simply expands the definition of “team” beyond one’s daily scrum. We are all inter-dependent, and we each must ensure that information gets to the people who need it, no matter where their name sits in the org chart. Making sure information is received in a timely fashion, rather than waiting for questions to be asked, is incumbent upon each of us.

In particular, this means that everyone on my team has the responsibility to inform me if I am not meeting commitments #1 and #2 stated above. I don’t read minds, and I can only take corrective actions if someone lets me know that they are bored, confused, pulled in too many directions, or under-utilized, etc.

4.   At the end of the day, we need to be proud of our work

I added this fourth principle a few years later. I had been working at a company for about a year, had delivered a handful of successful releases, yet sensed burn-out and loss of creativity in the team.

A startup demands almost contradictory qualities from its Engineering team: speed and creativity (quality is a given). Because the demand for speed is often explicit, while the demand for creativity is often implicit, it is easy to fall into the trap of focusing only on execution to the detriment of innovation, or even of the beauty of the code.

Yet, if we continuously succumb to the mantra of “ship, ship, ship”, and give up trying to build something cool, then we start on a slippery downward slope towards creating “blah” products. There are always pressures to ship more features faster, but if each of us is not proud of the product we are releasing to our customers then our customers won’t be excited about the product, and we won’t be having fun at work. Life is too short for us to accept either of these issues.

Making It All Work

There is nothing new, or magic, about these four leadership principles. The magic is in their daily practice. They work for me because I force myself to apply them on a daily basis, and because I remind my teammates of their existence, their rationale and their own commitments – whether I am welcoming a new member, holding a 1on1 or my weekly staff meeting, presenting at exec staff or the monthly Engineering update, or even chatting at the water cooler.

DevOps-Driven Development

It is now time to add the concept of “DevOps-Driven Development” to our repertoire.

“Test-driven” development, which originated around the same time as Extreme Programming and Agile Development, encourages us to think about testing as we architect our software and plan our tasks. Similarly, a “DevOps-Driven Development” approach ensures that we consider operational implementation as well as the deployment process during the design phase. To be clear, DevOps thinking needs to augment (and not replace) testing strategy.

Definition and Motivation

First a definition: I am using the word DevOps here as a shortcut to include both DevOps (build and deployment tools) and Ops (IT/data center Operations).

How many times have you heard “ … but it works on my machine!!” from a developer whose code was found to have a bug in the QA environment or, worse, in production? We all agree that these situations are a horrible waste of time for everyone involved, most of all customers. This post thus advocates that DevOps thinking, just like quality thinking, must occur at the design phase and continue throughout the development of the software, through its release to production, and even after it has been released.

Practicing DevOps-Driven Development

I have always advocated: “If you don’t know how to test it, you don’t know how to design it.” (Who Owns Quality? Part 3), to articulate the fact that “quality cannot be debugged out, it has to be designed in”. Similarly, if we want to know – before our customers call us – when our code crashes in Production, or becomes unusably slow, then we must build into our code the proper instrumentation and administration capabilities.

We must now add this mantra: “If you don’t know how to deploy it and manage it in Production, you don’t know how to design it”.

Just like we don’t allow code to be merged into Trunk (main branch) without complete unit tests, code cannot be merged into Trunk without correct deployment scripts, release notes, and production instrumentation.
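To make this gate concrete, here is a minimal sketch of such a pre-merge check, assuming a hypothetical repository layout where release notes and deployment scripts live alongside the code; it illustrates the policy, not a prescribed tool.

```python
# Minimal sketch of a pre-merge gate (hypothetical paths): block a merge to
# Trunk when a code change ships without release notes or deployment-script
# updates. Would typically run as a CI step that receives the changed files.
import sys

def merge_ready(changed_files):
    touched = set(changed_files)
    has_code = any(f.endswith((".py", ".java", ".js")) for f in touched)
    has_notes = any(f.startswith("release_notes/") for f in touched)
    has_deploy = any(f.startswith("deploy/") for f in touched)
    if has_code and not (has_notes and has_deploy):
        print("Blocked: code change without release notes and deployment scripts")
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if merge_ready(sys.argv[1:]) else 1)
```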

Here is a “thinking DevOps” check list:

Deployable

First of all, we must ensure that the code deploys successfully not only in Production but in all environments: Dev, QA, Stage, etc

This implies:

  • Developers write/update release notes, e.g. highlighting any changes required in the configuration of the environments: opening a new port, adding a column to the database, adding a new property to config files, etc.
  • Developers, in collaboration with the DevOps team, update deployment scripts, e.g. to account for a new executable or for schema changes in the database

The management of Config/Property files is beyond the scope of this blog, but I strongly recommend the “Infrastructure as code” approach: i.e. fully automating server/image configuration for deployment, and managing configuration, deployment scripts and application property files under source code control.

Monitor-able

If we want to detect problems before our (irate) customers call us, our code needs to be monitor-able – not only at the physical server level, but also at the level of each virtual machine, service and process, as well as of the networking and storage systems.

Monitor-ability goes beyond keeping track of CPU load, disk space and network bandwidth. We, developers, (should) know which parameters indicate when our system is misbehaving, whether it is a queue exceeding a given size or certain operations timing out. As a consequence, we must publish these parameters through interfaces compatible with the Ops team’s monitoring tools.
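As an illustration, here is a minimal sketch using the Prometheus Python client to publish one such application-level parameter (a hypothetical work-queue depth) so the Ops monitoring stack can scrape and alert on it; the metric name and port are assumptions, not a standard.

```python
# Minimal sketch: expose an application-level health parameter (queue depth)
# in a format the Ops monitoring tools can scrape (Prometheus text format).
import queue
import time
from prometheus_client import Gauge, start_http_server

work_queue = queue.Queue()
queue_depth = Gauge("work_queue_depth", "Jobs waiting in the work queue")

def publish_metrics(port=9100):
    start_http_server(port)                  # serves /metrics for the scraper
    while True:
        queue_depth.set(work_queue.qsize())  # the parameter *we* know matters
        time.sleep(5)
```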

Furthermore, by making performance metrics easily observable, we ensure that each new release maintains (or improves) the performance of the prior release.

Diagnosable

Despite our best intentions, we must humbly assume that at some point our code will crash, or seriously misbehave, and thus require troubleshooting. In the worst case, Development will be called in (usually in the wee hours of the night) to assist the Ops team. As anyone who has had to figure out why a given system intermittently crashes will attest, having log files capture meaningful information prior to the incident is invaluable; having to add logging statements after the fact is a painful process. Consequently, solid logging hygiene is critical (and worthy of a dedicated post); a short sketch follows the list below:

  • Log statements must be written in a format compatible with the log management system (Splunk, GrayLog2, …)
  • All log statements used during the coding and QA phase must be removed
  • Comprehensive Operations-focused logging must be added to document all operations that may fail due to environmental and data-related problems: out-of-memory, disk full, time out, user not found, access denied, etc. These are not bugs, but failures due to either environment (e.g. a server or connection is down) or incorrect data (e.g. the user has been deleted).
  • The hierarchy of logging levels must be enforced so that in normal operations log files are kept small, and conversely meaningful information is output when troubleshooting is required
  • Log statements must include all the information necessary to bind all operations across various services that are related to a single user-level transaction (e.g. clicking on a link to a new page, adding an item to cart) – more details below in “Tunable”.
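Here is a minimal sketch of what that hygiene can look like in practice, assuming a Python service and a log management system that ingests JSON; the field names and the transaction-id mechanism are illustrative.

```python
# Minimal sketch of logging hygiene: leveled, machine-parsable log entries
# that carry a transaction id so they can be correlated in Splunk/GrayLog2.
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "txn_id": getattr(record, "txn_id", None),   # binds related entries
            "message": record.getMessage(),
        })

logger = logging.getLogger("checkout")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)   # keeps log files small in normal operations

txn_id = str(uuid.uuid4())
# Operations-focused event: an environmental failure, not a bug.
logger.warning("payment gateway timed out", extra={"txn_id": txn_id})
```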

Security

This again is worthy of its own post, but code that is deployed to Production must both support the security practices implemented by the Ops team (e.g. authentication protocols, networking infrastructure) and be secure in itself (e.g. no SQL injection, buffer overflows, etc.).

Business Continuity

Business continuity is often overlooked, but we must ensure that any persistent data is stored in a storage system that is backed up by the Ops team. In other words, if we add a new database, we’d better ask the Ops team to add it to their backup scripts.

Similarly, if our infrastructure is deployed (or even just deployable) across multiple data centers, our code must support this through configuration.

The above represent the basic DevOps requirements that any developer must address before even thinking that his/her code is ready to release. The following sections detail additional practices that are highly recommended, but not strictly necessary.

Scalable

The code must be designed so that the Ops team can scale it in the datacenter without needing help from Development.

This may involve deploying the code to a bigger server. This implies that the code can be configured (and documented for the Ops team) to make use of the expanded resources, whether it is the number of cores, RAM, threads, I/O, etc.

This may also involve adding instances to a cluster. Consequently, the code must be discoverable (the load balancer must find out when a new instance has been added or removed), as well as cluster-aware (e.g. stateless).

Tunable

Because it is so hard to simulate all real-life user activities and behaviors in non-production environments, we must give the Ops team tools to tune the performance of our code through configuration rather than code deployment (e.g. JVM heap size, number of threads, queue sizes, hash table sizes, etc.).
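A minimal sketch of what “tunable through configuration” can look like, assuming environment variables as the configuration mechanism; the names and defaults are illustrative.

```python
# Minimal sketch: performance knobs the Ops team can change without a
# code deployment (here via environment variables).
import os
from concurrent.futures import ThreadPoolExecutor

WORKER_THREADS = int(os.getenv("APP_WORKER_THREADS", "8"))
REQUEST_QUEUE_SIZE = int(os.getenv("APP_QUEUE_SIZE", "1000"))
DB_POOL_SIZE = int(os.getenv("APP_DB_POOL_SIZE", "20"))

executor = ThreadPoolExecutor(max_workers=WORKER_THREADS)
```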

We must thus provide the metrics to observe performance. Let’s take the example of response time: depending on the complexity of the application, a user request may be handled by tens, or even hundreds, of services. In order to allow the Ops team to build a timeline of the interactions between all the services involved, each log entry must carry at least one tag that identifies the root transaction that generated the request. Otherwise it is impossible to determine whether the performance degradation comes from a given service, a particular server, or even the network infrastructure.

The same tagging will be used to troubleshoot failures (e.g. to discover why a given service fails intermittently).
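A minimal sketch of that tagging, assuming HTTP services and an illustrative header name; the point is that the same id appears in every downstream call and every log entry for one user-level transaction.

```python
# Minimal sketch: generate a root-transaction tag at the edge and propagate
# it to downstream services so Ops can rebuild the end-to-end timeline.
import uuid
import requests

def handle_add_to_cart(user_id, item_id):
    txn_id = str(uuid.uuid4())               # generated once, at the edge
    headers = {"X-Txn-Id": txn_id}           # illustrative header name
    # Every downstream call (and every log entry) carries the same tag.
    requests.post("https://cart.internal/items",   # hypothetical service
                  json={"user": user_id, "item": item_id},
                  headers=headers, timeout=2)
    return txn_id
```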

QA-able

As I mentioned in an earlier blog, QA does not stop in QA: we have to anticipate “unknown unknowns”, i.e. usage (or performance) scenarios that we have not modeled in our QA environments. By definition, there is not much we can do other than ensuring that our code is easy to trouble-shoot (see above) and that logs and associated data can be made available easily and rapidly to developers and QA team (e.g. by giving them access to the log management console).

Sometimes this requirement is more complex than it sounds, e.g. when user data must be deleted or obfuscated for privacy or security reasons. Again, this should be thought through before code is deployed.

Analytics – Growth Hacking – Usability

This last requirement stems from Marketing and Sales rather than Operations, but it is equally important since it drives revenue growth.

In most companies, marketing and sales rely on usage reports to drive new marketing campaigns, pricing, product offerings and even new features. As a consequence, any new feature must integrate with the Analytics infrastructure, whether via integration with usage tracking applications (e.g. Mixpanel, Flurry, …) or simply log management consoles (Splunk, GrayLog2, …). However, I highly recommend using separate logging infrastructure for operations monitoring and for usage analytics, if only because usage analytics requires additional data that is not useful for Operations monitoring (e.g. the time a user spends on a page is extremely valuable for usage analytics but irrelevant for Operations).

Even More So for Microservices

As we migrate towards a microservices architecture, early “DevOps thinking” becomes even more critical. As the article “Microservices: Four Essential Checklists when Getting Started” advises: “Microservices introduces a lot of moving parts that were previously non-existent in a monolithic system”.

What was a monolithic application running in a single virtual machine can morph into 5, 10 or even 20 microservices. Consequently, Development, DevOps and Ops must collaborate on microservices infrastructure tools – service registration, scaling each service up/down independently, health monitoring, error detection, etc. – to provide visibility into the status of these 20 microservices as a whole. This challenge has even prompted dedicated product categories (SignalFx, Nirmata, etc.).

Summary

Only with a holistic approach to product architecture can we ensure customer satisfaction with software that works the first time, and all the time. Deployment and operations management concerns, just like testability, must be addressed at design time, so that these capabilities are meshed natively into the code rather than “bolted on” after the fact. Failing to do so will likely impact the delivery schedule, or worse, create outages in production.

More importantly, there is so much we can learn from observing how our code behaves in Production (operational efficiency, stability, performance, usability) that we would do ourselves a disservice if we did not avail ourselves of this valuable information to drive further improvements to our product.

(Boosting) Morale in Engineering

The recent article by Jessica McKellar titled “This Is What Impactful Engineering Leadership Looks Like”, and the question “Any suggestions on how to inspire my team?” published on Everwise, prompted me to reflect on what impacts morale in Engineering teams.

At the risk of appearing to deflect my responsibilities as a VP of Engineering, I will assert that morale in Engineering is driven primarily by company culture. Consequently, in order to boost morale, my first priority is to focus outwards and educate the company leadership on how to create a culture that fosters productivity in Engineering.

In my experience, engineers, like most people, are motivated by a sense of purpose and accomplishment. Unrealistic deadlines imposed by the business teams, or constantly changing priorities, for example, will sap the morale of any team, no matter how capable, or charismatic, its leader.

Consequently, the answer to “How do you motivate your team?” is that I first eliminate everything that demotivates them – which is at least half the battle. Then I make sure that we employ the proper tools and methodologies, so that we are efficient collectively as well as individually. Only on rare occasions do I metaphorically stand on a soapbox and deliver a rousing motivational speech.

ENGINEERS ARE SELF-MOTIVATED

Does anyone really think that a professional football player needs a motivational speech before stepping on the field on Sunday? Heck no! He’s been waiting for that moment all week, all year! The rah-rah speech from the coaches or team captains that ESPN shows us is just for the cameras. Said another way, if a player needs this pre-game sideline speech in order to go all out on the field, then he’s in the wrong business, and I certainly wouldn’t keep him on my team.

Well, it’s the same for Engineers.

“FIRST DO NO HARM” – ADDRESS THE COMPANY CULTURE

This list of “morale killers” may appear self-evident, yet I see these mistakes perpetuated over and over.

  • Imposing unrealistically aggressive schedules for releases – whether on purpose or not
  • Frequent (i.e. more than every 6 months) changes to the corporate strategy that nullify the existing product roadmap
  • Asking the Engineering team for an extraordinary effort to deliver a feature to win a major deal, only to fail to win the deal … more than a couple of times
  • Excluding engineers from customer meetings
  • Failing to publicly recognize accomplishments – whether collective or individual

One of the most counter-productive patterns is to purposely impose an unrealistic deadline, based on the illusion that it will motivate engineers to work harder than they normally would. This pattern is ill-informed for the following reasons:

  • Engineers may work longer hours when required, but it is unlikely that they will produce their best work during these long hours. It could even be counter-productive if a higher proportion of bugs is introduced.
  • Sustained long hours foster neither creativity nor attention to detail
  • The most aggressive schedule is accomplished by setting a realistically aggressive schedule at the onset. Just like a sprinter has to set progressively aggressive times as the season progresses, each release schedule has to be aggressive, yet achievable.
  • Unrealistic deadlines are rarely met. As a consequence, even if the team delivers an amazing product in an incredibly short amount of time, on release day, we all feel like we failed (since we did not meet the crazy deadline). It is hard to build on top of failures.
  • On the contrary, by setting realistic deadlines, and ensuring that we hit them, we build confidence in ourselves. Furthermore, our internal partners (e.g. Marketing, Sales), as well as our customers, also start trusting us and our dates. Success begets success.

PROVIDE THE PROPER ENVIRONMENT

Not only are Engineers driven by success, we also care about the products we build. We want to ship products on time, we want our users to be thrilled by the product and we want the company to grow. Consequently, my only job is to remove all impediments to these fundamental motivations. I thus focus on:

  • Providing clear strategy and tactics
    • Why are we doing what we are doing (vision, product roadmap, business context), as well as what our immediate priorities are
    • Ensure that each team member has 1 – and only 1 – top priority
  • Expecting, and nurturing, a culture of results and a forward-looking attitude: focus on the challenge at hand, rather than laying blame.
  • Making post-mortem reviews actionable: deciding what we will do differently, and better, next time (rather than dwelling on an exhaustive list of things we did wrong) – and following up to ensure that we do do things differently the next time around
  • Making it “our team” rather than “my team” – by encouraging collaboration and ideation from everyone, particularly when it comes to development methodology. Adoption of best practices is all the easier when recommendations come from peers.
  • Making it easier and simpler to ship products by creating product-focused teams, and limiting meetings to those the team deems useful
  • Stimulating productivity by encouraging maximum use of tools and automation … and minimum number of meetings
  • Fostering team work by encouraging, even requiring, open and timely communications (good & bad news alike). Emphasize empathic cross-team communications (e.g. “be aware that the changes I had to make to the API have subtle implications for your component …”)

ALSO NURTURE THE INDIVIDUAL

In addition to removing impediments to productivity and providing the right tools and environment for the Engineering team at large, one naturally needs to address each individual’s motivations:

  • Clarity of role: it must be made obvious to each engineer how their contribution feeds the success of the Engineering team and the company – both tactically and strategically
  • “Personalization”: understanding what drives each person in the team (technical, managerial challenges), how they prefer to communicate, their work style, etc
  • Responsibilities: ensure that everyone in the team is challenged to the best of their abilities (to the extent possible given the needs of the organization)
  • Personal rapport: team spirit is built from common aspirations, but also from one-to-one personal relationships, including with the VP of Engineering

Morale is a complex feeling that is not easy to nurture in a team; it is much easier to destroy than to boost. By removing the “morale killers” – which typically originate in the company culture – one can bring a team to a level of enjoyment and productivity where only a little more effort creates a virtuous circle of improvement, in which team members themselves drive further improvements.

Sprint 0 “vs” Agile

Members of my teams usually look at me funny when I state at the start of a project that we need to plan. The boldest ones may even venture: “We’re doing Agile, so we don’t need to plan”, implying that planning is synonymous with waterfall, and certainly incompatible with, if not contrary to, Agile.

This is a misguided debate. It matters not whether planning is Agile; what really matters is whether it is a good Engineering practice and, secondarily, whether it can be blended with an Agile methodology.

WHEN TO PLAN

The need to plan arises whenever a complex set of features needs to be developed. Typically, complexity arises because the new project is dissimilar to anything we have done before, the scope is large, and/or we are dealing with “new stuff” (architecture, software framework, tools, people, performance, etc.).

The name Sprint 0

“Sprint 0” designates this planning phase … because the planning takes place before Sprint 1

However, it is partially a misnomer because it is not really a Sprint: it is not structured as a Sprint (the team may not have been formed yet), and its duration is not the typical sprint duration (it takes as long as it takes).

Analogy: Let’s go Hiking

Let’s say we’re going on a 5-day hiking trip in the wilderness. Before the hike, we will look at the map of the area and the profile of our hike (e.g. identify how much elevation we’ll need to climb) so as to distribute our daily efforts evenly across the 5 days [rough scoping]. In particular, we will identify places to get water and places to sleep each of the 4 nights, both really important [risk areas]. In addition, I’ll coordinate with my fellow hikers on who is bringing the tents, who is buying/carrying what food, etc. [roles & responsibilities]. Finally, I’ll copy the schedule of the park shuttle that will bring us back to where we parked the cars [overall schedule].

This is the equivalent of Sprint 0.

Planning is NOT Synonymous with Waterfall

The fundamental difference between a Sprint 0 plan and a Waterfall plan is that Sprint 0 plans JUST ENOUGH to eliminate risk, versus preparing a complete design and exhaustive task schedule.

Sprint 0 wants to eliminate surprises, such as unnecessary refactoring (e.g. because the UI team and the mid-tier teams have a different vision on how to build the API). The fact that both participate in the same daily scrum does not necessarily expose these differences.

The purpose of Sprint 0 is to, almost literally, identify “the lay of the land”: key features, roles, and major risk factors.

This plan leaves plenty of room to be Agile. Going back to the hiking analogy: on day 2, we can decide that we’ll walk extra hard on day 3 so that we can stay 2 days at campsite 4, which is on the shores of a beautiful lake. We could even decide to extend the trip by 1 day … as long as we ration our food accordingly.

The plan does not dictate at what time we get up, who cooks on what day, or what activities we’ll do on our “relax day”: fishing, swimming, playing Frisbee, etc. But the plan highlights a “relax day”, and thus the need to bring a Frisbee or fishing rods.

In addition, the plan sets yardsticks along the way so that we can measure our progress against our overall objective. For example, we’d better make sure that by day 3, we are past the mid-way point, if we want to finish our trip on day 5.

WHAT TO PLAN

The first activity of Sprint 0 is to review the main deliverables, both external (features) and internal (deploying a new framework, technical debt, performance). We also want to identify risks that could impact the technical solution or the schedule. It could be as trivial as having to finish a set of user stories before the key team member goes on vacation, or as complex as demonstrating that adding a cache does increase performance 10x.

We plan to a level of detail that gives us confidence that our design approach is solid and our schedule is realistic. How realistic depends on the needs of the business. Some companies commit to releases at a given date, others are fully agile.

Sprint 0 Deliverables

In InfoQ’s article “What is Sprint Zero? Why was it Introduced?”, one of the contributors, Mark Woyna, “uses Iteration Zero as a spike”: “The planning team is responsible for producing 3 deliverables by the end of the planning iteration:

  • A list of all prioritized features/stories with estimates
  • A release plan that assigns each feature/story to an iteration/sprint
  • A high-level application architecture, i.e. how the features will likely be implemented”

To which I add:

  • Design documentation relevant to the project: e.g. Interaction diagrams, Entity/Object definitions, APIs
  • A list of risks to monitor during the project: e.g. dependencies on external factors, critical results (e.g. validation of a new framework, or performance metric)
  • Detailed user stories for Sprint 1 – so that we can start Sprint 1 in earnest at the end of Sprint 0

Plan to a Judicious Level of Detail

Contrary to Waterfall practices, we don’t make all the decisions during Sprint 0, we make the minimum number of decisions necessary to “eliminate risk”.

Obviously, risk is only completely eliminated when the project is complete, but in most projects there are some critical decisions that reduce risk significantly. For example, writing out the interaction diagrams for the major use cases exposes the core assumptions about the main objects in the system and their responsibilities, and clarifies whether interactions are synchronous or asynchronous, what message broker we use, etc. The whole point is that hashing out disagreements over a diagram is a lot more efficient, and less costly, than doing it once code has already been written.
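As an illustration of “just enough” design, here is a minimal sketch of the kind of artifact such a discussion can produce: the UI/mid-tier contract captured as an interface before any implementation is written. The names and the synchronous-checkout decision are purely illustrative.

```python
# Minimal sketch: an API contract agreed upon during Sprint 0, so the UI and
# mid-tier teams build against the same assumptions (illustrative names).
from dataclasses import dataclass
from typing import Protocol

@dataclass
class CartItem:
    sku: str
    quantity: int

class CartService(Protocol):
    def add_item(self, user_id: str, item: CartItem) -> None: ...
    def get_cart(self, user_id: str) -> list[CartItem]: ...
    # Decision recorded here: checkout is synchronous and returns an order id.
    def checkout(self, user_id: str) -> str: ...
```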

ARGUMENTS AGAINST SPRINT 0

It’s Waterfallish

Scrum Methodology states that “the common dysfunction called ‘Sprint Zero’ is actually a contradiction in terms. Companies (and misinformed consultants and trainers) use this as a way to avoid changing waterfall habits.”

This argument totally misses the point, which is to have sound Engineering practices. Slapping an “Agile vs Waterfall” litmus test on the debate does not inform the discussion as to whether this particular practice is sound engineering.

Does Not Deliver Value to the Customer

The Scrum Alliance article “What is Sprint Zero?” argues that “Scrum believes that every sprint should deliver potentially usable value (… by the customer)”.

The fact that Sprint 0 does not deliver value to the customer at the end of the Sprint is a myopic argument, which misses the more important benefit of Sprint 0: namely, that it improves the velocity of the team for all the other Sprints. Because we lay out a high-level path to success in Sprint 0, we walk a straighter, and faster, line during the remainder of the Sprints. Equally important, we avoid “critical failures”, where a significant portion of the code needs to be refactored because we incremented our way into a design rather than taking the time to think it through.

Another way of saying this is that Sprint 0 brings value to us, the team, by providing better visibility to the whole project. We “return” this value to the customer, by being more efficient and faster overall.

CONCLUSION

When thinking about Engineering best practices, let us not corner ourselves into debating labels, e.g. Agile vs Waterfall. To me it simply makes sense to take time to reflect, think and plan before embarking on a complex project to:

  • Evaluate key features to be implemented
  • Agree on key design and architecture decisions, such as entities, APIs, protocols
  • Identify risk areas: schedule, resources, technology, performance
  • Map out tasks over time (a) to ensure that the project will be completed in a time frame commensurate with the needs of the business and (b) to set up yardsticks against which to calibrate our progress in the future
  • Lay groundwork for Sprint 1.

Basics of Performance Testing

At the risk of sounding harsh, I cannot remember the last time I interviewed a QA engineer who could clearly explain to me what a Performance Test is. Even more worrisome, when they described how they usually run performance tests, they could only describe the mechanics of the test (“I ran JMeter”). More often than not, the tests were wrong and, in any case, they could not interpret the results (largely because they did not know which numbers to look at).

So here is a basic description of what a performance test is and the important steps to follow – as well as one of the most common mistakes.

Note: for illustration purposes, I’ll use the term “site” (e.g. a retail e-commerce site), but this post applies identically to SaaS services.

Typical Performance Test

The primary performance test that engineers want to run on a product is: “How many users will our product support?”. This typically translates to “How many concurrent users can the current code and infrastructure support with acceptable response time?”

Another performance question is to determine how many total users can be supported. This problem is rarer, and we won’t cover it here.

The first challenge is: “what is a concurrent user?” While the answer is intuitively obvious: “the number of people using the site at the same time”, we still need to define what “at the same time” means. In particular, it does NOT mean “exactly” at the same time, i.e. within the same micro-second.

This incorrect “within the same microsecond” interpretation actually leads to the most common error in performance testing, where testers set up N JMeter clients (where N is the target number of concurrent users), launch them all at the same time, and measure the time it takes until the last client receives a response. This test does not measure concurrent users – in real life, users arrive randomly on a site, not at the exact same time. This is illustrated in the two figures below. Both represent 1,000 users using the site during a period of one hour. The “Burst” figure illustrates the “bad” test, where all 1,000 users hit the site in the first minute (and then there is no activity in the remaining 59 minutes). The correct scenario is likely to be closer to the second picture, where during each of the 60 minutes an average of 16.66 users hit the site (1,000 users / 60 minutes); in any given minute, the number of users can vary, for example between 5 and 25.

[Figure: random vs. burst user arrivals over one hour]

Another way of expressing that a site has 1,000 visitors per hour is that, on average, a new visitor hits the site every 3.6 seconds (3,600 seconds per hour / 1,000 visitors). Consequently, we can program JMeter to hit the site following a pseudo-random sequence with a mean of 3.6 seconds and a standard deviation of, for example, 1 second.

Depending on the length of our standard visit, we will need to deploy multiple clients. For example, if each simulated visit takes a minute, we will need an average of 16.6 clients (60 seconds per visit / 3.6 seconds between arrivals) to perform the simulation, as sketched below.
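Here is a minimal sketch of that arrival model, assuming 1,000 visits per hour and the normal distribution suggested above; the numbers are illustrative.

```python
# Minimal sketch: model random user arrivals (not a burst) for a load test.
import random

VISITS_PER_HOUR = 1_000
MEAN_GAP_S = 3_600 / VISITS_PER_HOUR        # 3.6 s between arrivals on average
VISIT_DURATION_S = 60                        # one simulated visit ~ 1 minute

def arrival_times(n, mean=MEAN_GAP_S, std_dev=1.0):
    """Return n arrival offsets (in seconds from the start of the test)."""
    t, times = 0.0, []
    for _ in range(n):
        t += max(0.1, random.gauss(mean, std_dev))   # gaps are never negative
        times.append(t)
    return times

# Load generators needed if each visit lasts about a minute:
clients_needed = VISIT_DURATION_S / MEAN_GAP_S       # ~16.6 concurrent clients
```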

How Many Concurrent Users to Use?

The excellent article “How do you find the peak concurrent users on your site?” describes in detail how we can find the peak number of visits during a given hour on the site (for performance testing, we care about visits rather than visitors).

Of course, the key is to know our site well enough to identify the actual day(s) and hour(s) when we have the peak traffic.

However, we cannot stop there. The purpose of the test is to give us confidence that our site will be able to handle the load in the future. So the historic peak number of visits needs to be extrapolated over the anticipated lifetime of the release we are certifying. E.g. if we want to be ready for the upcoming Black Friday, we need to take last year’s numbers and increase them based on the traffic projected by Sales & Marketing … plus an extra 25% as a safety buffer.

What is a Visit? What User Actions to Execute?

Now that we have figured out how many visits we need to run, and how often, we need to figure out what a visit is. There are 2 principal ways to answer the question:

  • a visit is an isolated user action
  • a visit is the simulation of an actual visit of a typical user.

Testing the Performance of an Isolated User Action

While we assume, and hope, that the visitors on our site don’t limit themselves to a single action when they visit, we may still want to measure the performance of a single action in 2 generic situations:

  • this is new functionality and we want to ensure that it is fast enough
  • as part of performance regression testing – we want to ensure that the new release is no worse than the past releases

The question now is, what isolated user actions should we test? Here are some examples:

  • Login / Home Page / Landing Page: the 1st page that is loaded when a user arrives on the site. It is important for it to be fast, since it is the user’s “first impression” of the site – and it usually requires a lot of work server-side since, by definition, we have no (or, for a returning user, little) cached data, so a lot of data needs to be computed or refreshed
  • All pages related to checkout and payment: we don’t want to lose a purchase because the site is too slow
  • Pages which Analytics in Production identify as slow
  • New Pages / heavily re-factored pages that are content- and/or compute-heavy

Testing the Performance of a Typical Visit

A “typical visit” can be constructed either “synthetically”, or “by replay”.

A synthetic visit is one that we build based on the nature of the site and statistics gleaned from the Production system – such as the average number of pages visited. A typical visit for a retail site would entail: login – select a category – then a sub-category – browse a couple of pages – select an item – checkout and pay. A sketch follows.
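Here is a minimal sketch of such a synthetic visit, expressed as an ordered list of steps that a load-generation client replays; the URLs and think times are illustrative assumptions.

```python
# Minimal sketch: one synthetic "typical visit" for a retail site.
import random
import time
import requests

TYPICAL_VISIT = [
    "/login",
    "/category/shoes",
    "/category/shoes/running",
    "/product/12345",
    "/cart/add/12345",
    "/checkout",
    "/payment",
]

def run_visit(base_url, session=None):
    session = session or requests.Session()
    start = time.time()
    for path in TYPICAL_VISIT:
        session.get(base_url + path, timeout=5)
        time.sleep(random.uniform(1, 5))   # "think time" between pages
    return time.time() - start             # total visit duration in seconds
```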

A “replay” visit is based on tracking the actual navigation of a random user on the site (e.g. using log files).

It is important to note that I should be using the plural: we need to identify multiple “typical visits”. In the example above, a second typical visit might involve a couple of searches rather than browsing by category/sub-category.

We can also break down our visits to focus on specific sections or functionality of the site – search, browsing, check-out – which makes it easier to interpret the results.

Let’s Not Forget Background Load!

In order for our performance test to be realistic and representative of Production, we need to simulate all the types of traffic taking place on the site – qualitatively and quantitatively – during our experiment – because each type of traffic consumes shared resources differently – whether it’s cache, database resources, access to disk, etc.

What I call “background load” is a simulation of the typical traffic on our site. The best way to simulate it is to record it and replay it – either using a proxy server, log files, or other tools – see, e.g., “Improving testing by using real traffic from production”.

One exception: if we want to characterize the performance of a specific code path – and/or benchmark it against previous versions – then we should not run any background load.

How Long Should We Run the Test?

When thinking about the timing of a performance test, we first need to address the “ramp-up” or “warm-up” phase. When we launch our performance test, our system starts “cold”: typically it has an empty cache, a small number of threads running, a few database connections set up, etc. To obtain a realistic measurement of performance, we need to wait for the system to reach its steady state. It is not always straightforward to tell when the system reaches steady state, but we can get clues from monitoring critical system parameters – CPU, RAM, I/O on key servers (e.g. the database) – and the response time itself should stabilize.
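One way to make “steady state” operational is sketched below: compare the mean response time over two consecutive windows of samples and declare warm-up over when the drift falls below a threshold. The window size and tolerance are illustrative assumptions.

```python
# Minimal sketch: declare steady state once the rolling mean response time
# stops drifting between two consecutive sample windows.
from collections import deque
from statistics import mean

WINDOW = 50        # samples per comparison window
TOLERANCE = 0.05   # 5% drift allowed between windows

recent = deque(maxlen=2 * WINDOW)

def record_sample(response_time_s):
    """Record one measurement; return True once steady state is reached."""
    recent.append(response_time_s)
    if len(recent) < 2 * WINDOW:
        return False
    samples = list(recent)
    older, newer = samples[:WINDOW], samples[WINDOW:]
    drift = abs(mean(newer) - mean(older)) / mean(older)
    return drift < TOLERANCE
```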

Secondly, because the simulated visitors access the system in a random fashion, we need to take our measurements over an extended period of time, so as to smooth out the randomness. An easy way to figure out the exact duration is to experiment: if the results we get after 10 minutes are the same – over 10 experiments – as those that we get over 1 hour – then 10 minutes is enough.

How Many Tests Do We Run?

Depending on the test, and the environment, we may have to run it multiple times. For example, if we are running on AWS, even on reserved instances, the performance may vary between runs, depending on network traffic, load from other tenants of the physical servers, etc.

Interpreting the Results

We can look at the results in at least 2 different ways:

1/ In absolute terms: does the response time meet our target?

2/ Relative to prior releases: is our response time in this release no worse than in prior releases?
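A minimal sketch of both comparisons, using the 95th percentile of response times; the target, baseline and tolerance values are illustrative.

```python
# Minimal sketch: absolute check (p95 vs target) and relative check
# (p95 vs previous release, with some tolerance for measurement noise).
def percentile(samples, p):
    ordered = sorted(samples)
    index = max(0, int(round(p / 100.0 * len(ordered))) - 1)
    return ordered[index]

def evaluate(samples, target_s=0.5, previous_p95_s=0.45, tolerance=1.10):
    p95 = percentile(samples, 95)
    meets_target = p95 <= target_s                     # absolute
    no_regression = p95 <= previous_p95_s * tolerance  # relative (+10% noise)
    return p95, meets_target, no_regression
```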

Performance Hygiene

Ideally, performance tests are automated and run regularly – at least after each Sprint. This allows us to catch performance regressions early.

Conversely, running performance tests after “code complete” is almost a waste of time, since this leaves us no time to remedy any serious performance issue, and thus puts us in a quandary: release a slow product on time vs release a good product late?

Common Mistake: Initial Conditions

Finally, we need to address a very common mistake, namely ignoring initial conditions.

To ground the discussion, let me give an example: we can all agree that the query “select * from TableA” will execute much faster if TableA has 1 row vs if TableA has 100M rows.

The same applies to performance testing. Just as we let the system reach steady-state before we start measuring performance, we also need to ensure that all assets impacting performance are fully loaded.

To be more specific, let’s say today is March 15, and I am working on the release that will be in Production for Black Friday in November. In order to have a meaningful performance test, I need to make sure I load my E-commerce database not just with the same number of customers that we have today (March 15), but with the projected number of customers we’ll have by the end of November! Similarly, with the number of products in the catalog, documents in the search database, etc.

This is a complex – but absolutely critical – step. Otherwise, our tests will tell us about the past, not the future.
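Here is a minimal sketch of that projection step; today’s volumes, the growth rates and the safety buffer are all illustrative assumptions that would come from the business teams.

```python
# Minimal sketch: scale today's data volumes to what they are projected to be
# when the release is actually in production (e.g. Black Friday), then seed
# the test databases with those numbers.
TODAY = {"customers": 2_000_000, "catalog_items": 150_000, "orders": 9_000_000}
MONTHLY_GROWTH = {"customers": 0.04, "catalog_items": 0.02, "orders": 0.05}
MONTHS_UNTIL_PEAK = 8        # e.g. March 15 -> late November
SAFETY_BUFFER = 1.25         # extra 25% headroom

def projected_volumes():
    return {
        name: int(count * (1 + MONTHLY_GROWTH[name]) ** MONTHS_UNTIL_PEAK * SAFETY_BUFFER)
        for name, count in TODAY.items()
    }
```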

The final consideration is to ensure that the initial conditions – as well as the test – are 100% reproducible. This means that the “initial conditions” need to be exactly reproducible: e.g. by restoring from a previously archived database, by using the same pseudo-random sequence to trigger visits, the same logs to replay background traffic, etc. Otherwise, we are simply running a different test each time. As a consequence, any benchmarking would be meaningless.

Summary

Performance testing is complex, and requires a lot of thought, careful planning and detailed work to produce results that are meaningful. Specifically, we need to:

  • Model one or more individual visit profiles constructed from traffic patterns on our site / in our service
  • Model visit rate based on our site’s Analytics and extrapolate it to the projected level over the lifetime of the release
  • Generate pseudo-random sequences to model users’ arrival on the site
  • Generate a model of background load and/or a mixture of individual visits that together are a good approximation of actual traffic
  • Make sure to give the site/service time to warm-up (i.e. reach steady state) before starting measurements, and run the test long enough to smooth out the pseudo-random patterns. Also run multiple tests if environmental conditions cannot be fully controlled
  • Finally, make sure to properly initialize the whole system – in a reproducible fashion – so as to account for all the data already present in the system.
  • Finally finally, ensure that all test conditions are reproducible, and that tests are automated so that they can be run regularly – preferably upon the completion of each sprint. This ensures that performance bugs are caught early, rather than 2 weeks before the target release date.

How to Prioritize New Features vs Bug Fixes

The most lively debates that I regularly encounter leading an Engineering team revolve around the allocation of resources between bug fixing and the development of new features: “Why doesn’t Engineering fix all the bugs?” exclaims a customer support person – “Why don’t we allocate all Engineering resources to New_Shiny_Feature_X?” wonders the salesperson whose major deal depends on this feature.

These are both absolutely legitimate questions! … That does not mean the answers are easy.

The main challenge in satisfying these two rightful requests is that they compete for the same resources, and that different people within the company have strongly-held, differing perspectives. The same person can even switch camps in a matter of days; it all depends on the last sales call. Do we have a customer threatening not to renew until we fix “their bugs”, or do we have a big deal pending on the delivery of a new feature?

As a consequence, it is imperative to create a business and technical framework for decision making, in which every stakeholder can not only express their perspective but also be satisfied with the decision process, and thus with the decisions that come out of it.

Framework for Decision Making

What’s more important? Or more precisely, what’s more important to implement in this release cycle?

  • New features driven by product roadmap and corporate strategy
  • Customer-driven enhancement requests
  • Bug fixes requested by existing customers
  • Paying down technical debt: upgrade architecture, refactor ugly code, optimize operational infrastructure, etc

The process to reach a decision is basically the same as for any business decision: we weigh how much income each item will generate and how much investment it will require.

Implementing a new feature, fixing a bug, enhancing a released feature or paying down technical debt demand the same activities: define requirements, design, code, test, deploy. They also all draw from the same pool of product managers, developers, QA and DevOps engineers. As a consequence, it is relatively easy to define the “investment” side of the equation.

Estimating the income side is a bit more complex, because it comes in multiple flavors. However, the process is the same as for prioritizing the backlog of new features – we need to articulate the business case:

  • Expected revenue stream (new features & enhancements)
  • Reduction in subscription churn (enhancements & bug fixes, as well as new features)
  • Cost reduction (technical debt / architecture) through increased future development velocity
  • Customer satisfaction (bug fixes & enhancements), which translates into better advocacy for the brand and churn reduction
  • Strategic objectives (market positioning, competitive move, commitment to win a major deal)

Each of these categories is important in its own right. Since they cannot all be translated into a common unit of measure (e.g. dollars), I recommend quantifying each of these elements relative to one another (e.g. using T-shirt sizes: S, M, L, XL, …) for each item on the list.

Practically, I create a matrix with rows listing each feature, bug, enhancement request and technical-debt item, and with the following columns (a small sketch follows the list):

  • Short Description
  • Link to longer description (Jira, Wiki, …)
  • Summary business case
  • Estimated engineering effort
  • Estimated calendar duration
  • Expected increase in revenue (if any)
  • Expected cost reduction (if any)
  • Customer satisfaction impact
  • Strategic value
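To make the matrix concrete, here is a minimal sketch of it as data, with T-shirt sizes for the dimensions that lack a common unit; the items, sizes and crude benefit/effort ratio are purely illustrative, and only good enough to surface the no-brainers.

```python
# Minimal sketch: the prioritization matrix as data, scored with T-shirt sizes.
SIZE = {"S": 1, "M": 2, "L": 3, "XL": 4}

backlog = [
    {"item": "New_Shiny_Feature_X", "type": "feature",
     "effort": "L", "revenue": "XL", "churn_reduction": "S", "strategic": "L"},
    {"item": "Fix intermittent checkout timeout", "type": "bug",
     "effort": "S", "revenue": "S", "churn_reduction": "L", "strategic": "M"},
    {"item": "Refactor legacy billing module", "type": "tech debt",
     "effort": "XL", "revenue": "S", "churn_reduction": "M", "strategic": "M"},
]

def rough_score(row):
    """Crude benefit-to-effort ratio, only used to surface the no-brainers."""
    benefit = sum(SIZE[row[k]] for k in ("revenue", "churn_reduction", "strategic"))
    return benefit / SIZE[row["effort"]]

for row in sorted(backlog, key=rough_score, reverse=True):
    print(f'{row["item"]:40s} {rough_score(row):.2f}')
```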

While this matrix is not perfect – ideally we’d want to assign a single score to each item – it allows us to (a) resolve the no-brainers (high benefit at low cost, or low benefit at high cost) and (b) frame the discussion for the remainder against the business context of the company:

  • Are we in a tight competitive race where we need to show momentum in our innovation?
  • Do we have one, or more, major deals dependent on a given set of features?
  • Are our customers grumbling about our product quality, or worse threatening to leave?
  • Is our scalability at risk because of legacy code?
  • Are we being hampered in our ability to deliver new features by too much legacy code?

While this will not eliminate passionate debates at Product Council, it will hopefully bound them, particularly if we can first agree on high-level priorities for the business.

Why Not Have a Dedicated Sustaining Engineering Team?

There are two primary reasons why a Sustaining Engineering team is a bad idea: first, it “does not answer the question” of prioritization, and secondly, it is a bad practice because it creates a class of “second-class citizen” engineers.

Say you want to have a Sustaining Engineering team. How large should it be? 5%, 10%, 20%, 50% of all of Engineering? Why? Should its size remain constant? Or are we allowed to shift resources in and out depending on business priorities? Answering these questions requires the same analysis and decision making as I propose above, but is burdened by the inflexibility of a split organization.

Regardless of whom you assign to Sustaining Engineering, these engineers will be considered second-class by the self-proclaimed hotshots who get to work on new features. Worse, the split promotes laziness with respect to quality in the “new feature” team: they know that Sustaining will clean up whatever mess they leave. This attitude is pervasive, and over time can even lead to cherry-picking of work, with Sustaining ending up completing the “new feature” work. For example, the “new feature” team releases a new product on Chrome (so that they can meet “their date”), but Sustaining gets to make it work on Internet Explorer.

A Useful Best Practice

Any bug older than 12 months should be removed from the bug backlog: either marked as “Won’t Fix” or assigned to a secondary backlog list (which, I predict, will never be reviewed). The justification is simple: if a given bug has lived through a year’s worth of bug triages without rising to the top and being fixed, then it is almost certain that it will never be prioritized for resolution. Better to put it out of its misery. Furthermore, this keeps the bug backlog to a reasonable size and bug triage a manageable task. Finally, if for some reason the bug’s visibility rises anew, it can be returned to the active backlog.
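A minimal sketch of this rule as an automated sweep; the data shape and the “Won’t Fix” resolution field are illustrative, and the actual bug-tracker integration is left out on purpose.

```python
# Minimal sketch: sweep the backlog and retire any bug older than 12 months.
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=365)

def sweep_backlog(bugs, today=None):
    """bugs: iterable of dicts with 'id' and 'created' (datetime) keys."""
    today = today or datetime.utcnow()
    stale = [bug for bug in bugs if today - bug["created"] > MAX_AGE]
    for bug in stale:
        bug["resolution"] = "Won't Fix"   # or move it to a secondary backlog
    return stale
```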

Conclusion

The adage “Software always has bugs” remains true, not because it is impossible to write perfect software (I argue that this IS possible), but rather because, in a business context, quality is not an end in and of itself. Don’t get me wrong: high quality is critical, but fixing ALL the bugs is not a requirement for business success.

As a consequence, only 1 criterion matters: “what moves the business forward the most effectively?”

Typically this means making customers happy. There are times when customers are happier if we fix bugs, at other times they prefer to see a new feature brought to market earlier. The answer depends on what drives their business. Do they prefer that we fix a bug that costs them an extra hour of work per day or that we launch a new feature that will allow them to grow their business by 10% in 6 months?

Notes from SF Data Mining Meetup: Recommendation Engines

Excellent talks: each of the presenting companies approaches the design of its recommendation engine based on the specifics of its market and users.

Recommendation Engines

Thursday, April 4, 2013, 6:30 PM
Pandora HQ, 2101 Webster Street, Suite 1650, Oakland, CA

Speakers: Todd Holloway (Trulia) on Trulia Suggest; John Jensen and Mike Sherman (Rich Relevance); Eric Bieschke (Pandora).

Here are my notes on their respective technology stacks. Hadoop, Hive, Memcached, Java are used by all 3.

1. Trulia: Todd Holloway on Trulia Suggest.

  • Hadoop
  • Hive
  • R on each Hadoop Server
  • Memcached
  • Java

2. Rich Relevance: John Jensen and Mike Sherman

  • Hadoop
  • Hive
  • Pig
  • Crunch

Starting to deploy

  • Kafka
  • Storm

3. Pandora: Eric Bieschke

  • Python, Hadoop and Hive for offline processing
  • Memcached and Redis for near-line & online
  • Java & PostgreSQL for online

Memcached: used as a “key-value store in the sky”, as long as you don’t care about losing data

Redis: “persistent Memcached”