Amazon S3: No More Flat-rate Pricing

What does it mean when a vendor adopts flat-rate pricing but later moves away from it? Does this mean the vendor misjudged the market initially, or wised up and found new ways to exploit its customers? The truth is both more and less complicated than either explanation.
Amazon is in the news because they announced that as of June 1, they are introducing tiered pricing for parts of their S3 web services. Amazon’s announcement states that many customers will pay less and some will pay (a lot) more. To understand what is going on, it helps to understand a little about pricing theory and practice.
Flat-rate pricing is simple
For new products, and especially those entering new markets (like S3 for the past year), flat-rate pricing often makes the most sense. It is simple, and you may not know enough to properly segment the market for anything more complicated. Flat-rate pricing, however, limits your ability to maximize revenue because everyone pays the same price. It is also a poor pricing methodology when you have fixed capacity but variable demand. For example, flat-rate pricing for electricity is terribly inefficient: the power company can only produce a fixed amount of electricity that it cannot store, so it has a lot of wasted capacity during off-peak demand, and yet everyone pays the same rate. This is why electric utilities encourage time-of-day pricing, and demand it of large, peak-hour users such as industrial customers. Higher prices during peak hours discourage use when generation is near capacity, and lower prices during off-peak hours encourage use when generating capacity would otherwise be idle. We’ll see how this relates to Amazon’s S3 web services a bit later.
Capturing more revenue
Once a business knows more about its customer base or has established itself in the market, it will turn its attention to maximizing what people will pay, or, as those of us trained in the “dismal science” of economics like to say, minimizing the consumer surplus. For a lengthy and humorous look at pricing, read Joel Spolsky’s classic 2004 article on how to price software.
The consumer surplus is the amount that some customers would have been willing to pay in excess of the current price at a given point on the demand curve. Figuring out how best to capture this potential revenue in a world of imperfect information is one of the great challenges of pricing. Airline fares are the classic example of complex price discrimination in pursuit of the consumer surplus.
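To make the consumer surplus concrete, here is a quick back-of-the-envelope sketch in Python. The linear demand curve, prices, and quantities are entirely made up for illustration; the point is just to show the surplus as the gap between what buyers would have paid and the single flat price they actually pay.

```python
# Illustrative only: a made-up linear demand curve, not data from any real market.
# Demand falls from 1,000 units at a price of $0 to 0 units at $10.
def demand(price):
    """Units demanded at a given price (assumed linear demand curve)."""
    return max(0.0, 1000 * (1 - price / 10.0))

def consumer_surplus(price, steps=10_000):
    """Approximate the area between the demand curve and the flat price.

    Every unit demanded at a price above `price` represents a buyer who would
    have paid more, so integrating demand from `price` up to the choke price
    gives the surplus left in buyers' pockets.
    """
    choke_price = 10.0  # price at which demand hits zero (assumption above)
    dp = (choke_price - price) / steps
    surplus = 0.0
    for i in range(steps):
        p = price + (i + 0.5) * dp  # midpoint of each price slice
        surplus += demand(p) * dp
    return surplus

if __name__ == "__main__":
    flat_price = 4.0
    print(f"Units sold at ${flat_price:.2f}: {demand(flat_price):.0f}")
    print(f"Consumer surplus left on the table: ${consumer_surplus(flat_price):,.2f}")
```

With these invented numbers, a flat price of $4 sells 600 units but leaves $1,800 of surplus with the buyers, and that surplus is precisely what the more elaborate pricing schemes below try to capture.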

The most common pricing schemes for capturing the consumer surplus are tiered and usage-based pricing. The per-unit price you pay depends on how much you use, or in the utility example above, when you use it. The per-unit price may go up or down with volume, depending on the provider’s needs and intent. In the typical case where there are economies of scale, the marginal price will decrease in the form of volume discounts. When faced with scarcity or limited capacity, however, the marginal price will increase. Once again, keep reading and I’ll tie this back to Amazon’s S3 announcement.
Tiered pricing
Tiered pricing is a vehicle for price discrimination based on features, components, channel used, or customer type, and it does a good job of matching the price paid to the products or services provided. Some examples you might encounter every day include Windows Vista (six versions), many types of enterprise software (nearly infinite variations), pizza toppings, and construction. The goal of tiered pricing is to drive toward ever more granular tiers, to the point (in theory, at least) where everyone is charged exactly what they are willing to pay: no more and no less. This concept is the pricing corollary to my colleague Nilofer Merchant’s article on “natural segments,” where marketers are able to target individual users.
Usage-based pricing
An alternative to tiered pricing is usage-based pricing. There are lots of ways to define usage, but fundamentally it is tied to something that can be measured: time, volume, and so on. In its simplest form (using some aggregate measure), its objective is no different from that of tiered pricing: to minimize the consumer surplus of those willing to pay more.
If production is constrained, that is, there are no direct economies of scale, usage-based pricing takes on the additional role of helping to optimize capacity. Power plants, airlines (and server farms, for that matter) are capital-intensive resources with high fixed costs that the provider incurs whether they are operating or not. Worst of all, you need to maintain enough capacity to serve peak usage, meaning that most of the time you have expensive capacity sitting idle. Time-of-day pricing, a form of usage-based pricing, is an excellent way to get peak users to pay a premium while encouraging off-peak users to consume more when capacity would otherwise be idle.
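As a rough sketch of the mechanics, the snippet below bills the same total consumption under a flat rate and under a peak/off-peak schedule. The rates, hours, and usage profiles are hypothetical numbers chosen only to show the incentive at work, not anyone’s actual tariff.

```python
# Hypothetical rates and hours, purely for illustration.
FLAT_RATE = 0.12          # $ per kWh, all hours
PEAK_RATE = 0.20          # $ per kWh, 8:00-20:00
OFF_PEAK_RATE = 0.06      # $ per kWh, all other hours
PEAK_HOURS = range(8, 20)

def flat_bill(hourly_usage):
    """Everyone pays the same per-unit price regardless of when they consume."""
    return sum(hourly_usage) * FLAT_RATE

def time_of_day_bill(hourly_usage):
    """Peak consumption pays a premium; off-peak consumption gets a discount."""
    return sum(
        kwh * (PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE)
        for hour, kwh in enumerate(hourly_usage)
    )

if __name__ == "__main__":
    # 24 hourly readings; both customers consume the same 42 kWh in total.
    peak_heavy = [0.5] * 8 + [3.0] * 12 + [0.5] * 4      # mostly during peak hours
    off_peak_heavy = [3.0] * 8 + [0.5] * 12 + [3.0] * 4  # mostly off-peak

    for label, usage in [("peak-heavy", peak_heavy), ("off-peak-heavy", off_peak_heavy)]:
        print(f"{label}: flat ${flat_bill(usage):.2f}, "
              f"time-of-day ${time_of_day_bill(usage):.2f}")
```

Both customers pay the same flat-rate bill, but under time-of-day pricing the peak-heavy customer pays more than twice what the off-peak-heavy one does, which is exactly the signal the utility wants to send.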
Revisiting Amazon’s S3 announcement
Let’s take a look at what Amazon did with the pricing for S3, and what it tells us about their thinking. There were four major components to the announcement:

  • De-aggregation of the S3 components: bandwidth, system requests, and storage
  • No change to storage fees
  • Tiered fees (volume discounts) for bandwidth
  • A new fee for system requests

Storage is easy to optimize, so flat-rate pricing makes sense there. Storage is also pretty cheap, so this component has little impact either way.
Offering volume discounts for bandwidth is not too painful for Amazon, since it realizes economies of scale on the bandwidth it buys. Heavy users of S3 probably were pressuring Amazon on this front; they state as much in their announcement as justification for the move. Amazon can maintain margins even while lowering the marginal price. Pretty close to a win-win.
The new fee for system requests is the interesting part of the announcement. Servers are capital-intensive and, outside of Moore’s Law, generally not subject to economies of scale. Amazon wants its servers working hard all the time.
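To see how the de-aggregated pieces fit together, here is a minimal sketch of a bill calculator with flat storage, tiered (volume-discounted) bandwidth, and a per-request fee. The tier boundaries and rates are placeholders of my own for illustration, not Amazon’s actual published prices.

```python
# Placeholder rates and tiers, not Amazon's actual S3 prices; they only
# illustrate the structure: flat storage, tiered bandwidth, per-request fee.
STORAGE_RATE = 0.15                # $ per GB-month, flat
REQUEST_RATE = 0.01 / 1000         # $ per request
BANDWIDTH_TIERS = [                # (GB in this tier, $ per GB): volume discounts
    (10_000, 0.18),                # first 10 TB
    (40_000, 0.16),                # next 40 TB
    (float("inf"), 0.13),          # everything beyond 50 TB
]

def bandwidth_charge(gb_transferred):
    """Marginal price per GB falls as usage climbs through the tiers."""
    remaining, total = gb_transferred, 0.0
    for tier_size, rate in BANDWIDTH_TIERS:
        in_tier = min(remaining, tier_size)
        total += in_tier * rate
        remaining -= in_tier
        if remaining <= 0:
            break
    return total

def monthly_bill(storage_gb, transfer_gb, requests):
    """De-aggregated bill: storage + tiered bandwidth + per-request fee."""
    return (storage_gb * STORAGE_RATE
            + bandwidth_charge(transfer_gb)
            + requests * REQUEST_RATE)

if __name__ == "__main__":
    # A modest customer versus a heavy, request-intensive customer.
    print(f"Small user: ${monthly_bill(50, 200, 100_000):,.2f}")
    print(f"Heavy user: ${monthly_bill(20_000, 60_000, 500_000_000):,.2f}")
```

Note where the money moves: the marginal bandwidth price falls with volume, while the request fee grows in direct proportion to how hard a customer hits the servers, and managing that server load is the real story here.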
There are two ways to optimize server utilization.
The first is to discriminate based on on- or off-peak usage, that is, encourage off-peak use or discourage peak use of the servers. Variable pricing for eCommerce services based on time of day would be enormously complex and probably a non-starter for Amazon’s web services customers, so this option is out.
Since peak-time pricing won’t work, Amazon adopted the next-best alternative of managing just the high side: discriminate against peak users in general. Because Amazon cannot encourage off-peak usage, they are forced to lower peak usage with a blunter club: raising costs for the heaviest users. This suggests that the biggest users of S3 also create the largest spikes in server transactions. I assume that Amazon has data to back this up, but it is not rocket science to make this leap.
In summary, I read Amazon’s announcement as indicating that they are committed to the market, are looking to the future, and understand what they are doing, at least from a pricing standpoint. Specifically, from the announcement, I infer:

  • They feel the web services market is maturing, and thus can support something more than basic, flat-rate pricing.
  • They are hitting, or anticipate hitting, server capacity limits, and need to manage for this situation.
  • They have thought through the pricing and capacity issues. This doesn’t mean they won’t screw up later, but they’ve cleared the first hurdle and understand that the appropriate pricing model is that of a utility, not a bookseller.

