Want to Predict your Cost in the Cloud? Roll Up Your Sleeves!


The selection of a cloud service provider is a critical decision for any software service provider, and cost is naturally a key driver in this selection. However, predicting the cost of running servers in the cloud is a project in and of itself, because the only way to build a reliable model of costs is to go ahead and deploy our systems with the service providers.


Why is it not possible to forecast costs with pen and paper?

The main reason that pricing is so hard to forecast is that our system architecture in the cloud will likely be different from the one currently running in our own datacenter: the server configurations are different, the networking is different, and most likely we want to take advantage of the new features that come “for free” with a deployment in the cloud: higher availability, geographical redundancy, larger scale, etc. We’ll cover this in detail in an upcoming post.


Another reason why it is hard to predict costs is that we don’t really know what we are getting:

Of the primary attributes of a server – RAM, CPU, storage, and I/O (network bandwidth) – only RAM and storage capacity are guaranteed by cloud vendors. Vendors provide varying degrees of specificity about CPU and other key characteristics. Amazon defines EC2 Compute Units: “One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor”. Rackspace’s price sheet categorizes servers by available RAM and disk space (the more RAM, the more disk space); their FAQ mentions the number of virtual cores each server receives, based on the amount of RAM allocated, but I could not find their definition of a virtual core. GoGrid and Joyent provide similarly limited information.


As a side note, one needs to be aware that vendors typically refer to “virtual cores” – as opposed to real (physical) cores. A virtual core corresponds to one of the two hyperthreads that Intel processors have run per physical core since 2002; a server with a quad-core Intel Xeon processor thus presents 8 virtual cores. You can read this 2009 post, plus its comments thread, for more specifics. While the data is dated, the observations are still relevant.
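
If you want to see the virtual/physical split for yourself on a given server, a quick check is easy to script. Here is a minimal sketch using the psutil library (an assumption on my part – any tool that reads the CPU topology, such as lscpu, works just as well):

```python
# Compare logical ("virtual") vs physical core counts on a server.
# Requires: pip install psutil
import psutil

logical = psutil.cpu_count(logical=True)    # hyperthreads, i.e. "virtual cores"
physical = psutil.cpu_count(logical=False)  # actual physical cores

print(f"Logical cores:  {logical}")
print(f"Physical cores: {physical}")
# On a hyperthreaded quad-core Xeon, expect logical=8 and physical=4.
```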


So, there is a lot that we don’t know about the servers on which we will run our system: CPU clock speed, L1/L2 cache sizes, I/O bus speed, disk spindle rotation speed, network card bandwidth, etc.

Furthermore, performance will vary across servers, since cloud vendors run a diverse fleet of machines of different ages. Thus, each time a new image is deployed, it lands on an essentially random server with the same nominal specs (RAM, storage) but unknown other physical characteristics (CPU clock, I/O bandwidth, etc.).
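
Since we cannot know in advance which physical box a deployment lands on, it is worth fingerprinting every fresh instance before benchmarking it, and logging the results alongside the measurements. A minimal sketch for Linux instances (the crude write test is an illustration, not a rigorous disk benchmark):

```python
# Fingerprint a freshly deployed Linux server: CPU model plus a rough
# sequential disk write throughput number, to log with each test run.
import os
import time

def cpu_model():
    # /proc/cpuinfo is Linux-specific; "model name" identifies the CPU.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("model name"):
                return line.split(":", 1)[1].strip()
    return "unknown"

def disk_write_mb_per_sec(path="probe.tmp", size_mb=256):
    # Very rough: write size_mb sequentially, fsync, and time it.
    block = b"\0" * (1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.remove(path)
    return size_mb / elapsed

print("CPU:", cpu_model())
print(f"Disk write: {disk_write_mb_per_sec():.0f} MB/s")
```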


Another well-documented problem is that of noisy neighbors. While hypervisors do a fairly good job of controlling the allocation of CPU and memory, they are not as effective at controlling the multitude of other factors that affect performance; I/O in particular is very sensitive to contention. While VMware affirms that vSphere solves this problem, most (all?) cloud vendors use open-source hypervisors.

In any event, this problem is systemic and cannot be solved by the hypervisor. For example, we did a lot of research on the best configuration for our Cassandra servers (a database for big data). One of the main performance optimizations driving Cassandra’s design is to favor “append” (rather than update) operations, thus minimizing random movement of the disk’s read/write head and maximizing disk I/O throughput. Unfortunately, all this clever optimization goes out the window if we share the server – and thus the disks – with a noisy neighbor performing random read-write operations. I had the chance to discuss this a couple of months ago with members of the Cassandra team at Netflix (one of the largest users of Cassandra, and almost 100% deployed on Amazon): they solve the issue by using only m2.4xlarge instances on AWS, which (today) ensures that they are the sole tenant on the physical server – and thus have no noisy neighbors.
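
The underlying effect is easy to reproduce with a micro-benchmark that contrasts sequential appends with writes at random offsets on the same disk. A minimal sketch (absolute numbers will vary wildly with hardware, filesystem and page cache; the point is the gap between the two patterns – a gap a noisy neighbor can impose on you):

```python
# Contrast sequential append throughput with random-offset writes,
# illustrating why append-optimized stores such as Cassandra suffer
# when a co-tenant generates random I/O on the same disk.
import os
import random
import time

SIZE_MB = 128
BLOCK = 1024 * 1024  # 1 MB blocks

def sequential_mb_per_sec(path):
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(b"\0" * BLOCK)
        os.fsync(f.fileno())
    return SIZE_MB / (time.time() - start)

def random_mb_per_sec(path):
    # Pre-size the file, then overwrite blocks at random offsets,
    # forcing seeks between writes on a spinning disk.
    with open(path, "wb") as f:
        f.truncate(SIZE_MB * BLOCK)
    offsets = list(range(SIZE_MB))
    random.shuffle(offsets)
    start = time.time()
    with open(path, "r+b") as f:
        for i in offsets:
            f.seek(i * BLOCK)
            f.write(b"\0" * BLOCK)
        os.fsync(f.fileno())
    return SIZE_MB / (time.time() - start)

print(f"Sequential: {sequential_mb_per_sec('seq.tmp'):.0f} MB/s")
print(f"Random:     {random_mb_per_sec('rnd.tmp'):.0f} MB/s")
os.remove("seq.tmp")
os.remove("rnd.tmp")
```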


Putting all this together, it is pretty clear that comparing vendors on paper is practically fruitless.

Let’s Try It Out

The only practical way to create a realistic budget forecast is to actually deploy systems on the selected cloud vendor(s) and “play” with them. Here are some areas to investigate and characterize, beyond simply validating functionality:

  • Optimal server configuration for each server role (web, database, search, middle tier, cache, etc.). We need to make sure that each server role is adequately served by one of the configurations offered by the vendor; for example, very few vendors offer servers with more than 64 GB of RAM.
  • Performance at scale. Since we only pay for the servers we rent, we can run full-scale performance tests for a few hours or days at relatively low cost (a few hundred dollars): Netflix tested Cassandra performance on AWS (“Over a million writes per second”) on clusters as large as 288 nodes for under $600. See the cost sketch after this list.
  • End-to-end latency (measured from an end-user perspective) – since latency will be impacted by the physical distribution of the servers
  • Pricing model
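
To put rough numbers on the “few hundred dollars” claim above, the arithmetic is simply nodes x hourly rate x hours. A back-of-envelope sketch (the rates below are illustrative assumptions, not current list prices – plug in your vendor’s price sheet):

```python
# Back-of-envelope cost of a time-boxed, full-scale performance test.
HOURLY_RATE = {            # assumed USD per instance-hour (placeholders)
    "m1.xlarge": 0.48,
    "m2.4xlarge": 2.00,
}

def test_cost(instance_type, nodes, hours):
    return HOURLY_RATE[instance_type] * nodes * hours

# e.g. a 288-node cluster rented for 4 hours:
print(f"${test_cost('m1.xlarge', nodes=288, hours=4):,.2f}")  # $552.96
```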


For these tests to be meaningful, one needs to ensure that deployments are realistic: for example, spread across availability zones and regions, if we plan on leveraging these capabilities – as they impact not only performance (due to increased network latency) but also pricing (data transfer charges).
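
Measuring the latency side of this is straightforward: time TCP connection establishment from a representative client location to servers deployed in each candidate region. A minimal sketch (the endpoints are hypothetical placeholders for your own deployed servers):

```python
# Measure TCP connect latency to servers deployed in different regions.
import socket
import time

ENDPOINTS = {  # hypothetical hosts; point these at your own deployments
    "us-east": ("server-us-east.example.com", 443),
    "eu-west": ("server-eu-west.example.com", 443),
}

def connect_latency_ms(host, port, samples=5):
    times = []
    for _ in range(samples):
        start = time.time()
        with socket.create_connection((host, port), timeout=5):
            times.append((time.time() - start) * 1000)
    return min(times)  # best of N approximates the network round trip

for region, (host, port) in ENDPOINTS.items():
    print(f"{region}: {connect_latency_ms(host, port):.1f} ms")
```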


In addition, each test must be run several (10 to 20) times, with fresh deployments, at different times of day, in order to obtain a representative sample of servers and neighbors.
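
This repetition is tedious by hand but easy to automate against the vendor’s API. A minimal sketch using boto3 on AWS (the AMI id is a placeholder and the benchmark hook is hypothetical; error handling omitted):

```python
# Launch a fresh instance, benchmark it, terminate it - then repeat
# at different times of day to sample different servers and neighbors.
# Requires: pip install boto3, plus configured AWS credentials.
import boto3

ec2 = boto3.resource("ec2")

def run_benchmark(public_ip):
    # Hypothetical hook: ssh to the instance, run the load test,
    # and return the measured numbers.
    raise NotImplementedError

def one_trial(image_id="ami-xxxxxxxx", instance_type="m1.large"):
    # image_id is a placeholder; use your own benchmark AMI.
    instance = ec2.create_instances(
        ImageId=image_id, InstanceType=instance_type,
        MinCount=1, MaxCount=1,
    )[0]
    instance.wait_until_running()
    instance.reload()  # refresh metadata to pick up the public IP
    try:
        return run_benchmark(instance.public_ip_address)
    finally:
        instance.terminate()

results = [one_trial() for _ in range(15)]  # 10-20 trials, per the text
```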


As important as the technical performance validation, the pricing model must be validated, since vendors charge for a variety of services in addition to the lease of the servers: most notably bandwidth for data transfers (e.g. across regions), but also optional services (e.g. AWS Monitoring or Auto Scaling), as well as per-operation fees (e.g. Elastic Block Store I/O requests). These per-operation fees can add up to very large amounts if one is not careful. For example, see the Amazon SimpleDB price calculator – we have to run SimpleDB under real load in order to figure out what numbers to plug in. Overlooking this step can be costly.
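
In practice this means instrumenting the test run with usage counters and multiplying them by the vendor’s unit prices afterwards. A minimal sketch (the unit prices and counters below are placeholder assumptions; substitute the current price sheet and your own measurements):

```python
# Roll measured usage counters up into an estimated monthly bill.
UNIT_PRICE = {                    # assumed USD unit prices (placeholders)
    "instance_hours": 0.48,       # per instance-hour
    "data_out_gb": 0.12,          # per GB transferred out
    "ebs_million_ios": 0.10,      # per million EBS I/O requests
}

measured = {                      # counters collected during a test month
    "instance_hours": 20 * 24 * 30,   # 20 servers, full month
    "data_out_gb": 1500,
    "ebs_million_ios": 4200,          # per-operation fees add up fast
}

bill = {item: measured[item] * UNIT_PRICE[item] for item in measured}
for item, cost in bill.items():
    print(f"{item:16s} ${cost:10,.2f}")
print(f"{'total':16s} ${sum(bill.values()):10,.2f}")
```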


Once the technical tests have been completed and the system configuration validated, I recommend at least a full billing cycle of simulated operations, in order to obtain an actual bill from the vendor, from which we can build our pricing model.
