It’s surprising how the volume of data around the world, on the Internet, is changing. Who would have thought 10 years ago that a physics experiment would one day generate 25 petabytes (26 214 400 GB) of data yearly? Yes, I’m looking at you, LHC. Times are changing, and companies are changing with them. Everyone designs for scale a tad differently. And that’s good - it’s important to design for the right scale.

Please note: the views I express are mine alone and they do not necessarily reflect the views of Amazon.com.

Over-scaling

Let’s assume you are a startup building a brand new, unique CMS (unlike the 2000 other ones). Is it worth thinking about NoSQL/Cassandra/DynamoDB/Azure Blob Storage? Probably not. It’s safe to assume that most of the data will fit into one small or medium SQL database. When performance problems appear, that’s good news. It means your startup is working (or you just don’t know SQL…). It means you have clients - paying clients. By that point you will probably also have a completely different idea about the system: you’ve gone from a “no clients, imagination only” state to a “working product with customers and proven solutions” state. You can iterate over your architecture now. Hopefully you have funds now. What I’ve heard about multiple times is a startup failing because someone built a complicated, scalable system for 321 000 clients. All the money went into IT, none into the business. Total failure.

No need for scaling

Now, some systems don’t have to scale, or their scale requirements grow more slowly than hardware improves (so effectively they fall into the first category). ERP systems for most medium-sized companies are probably a good example. A large SQL database, maybe Azure Blob Storage/DynamoDB, and all scalability problems are solved.

Some scaling needed

As I mentioned before, sometimes throwing a NoSQL database into the “ecosystem” solves the problem. Unfortunately for us geeks, that’s usually the case.

Scaling definitely needed

There are times when people say “Azure Blob is highly scalable”. Well, that statement is a joke. Azure Storage isn’t scalable at all. Its 20 000 requests-per-second limit might be only a tiny part of what you need. Furthermore, there are other hard limits: see Azure Storage Scalability and Performance Targets. To be fair, DynamoDB has limits too; however, you can contact support and request as much throughput as you need. There is one more catch - pricing. In Azure you pay for the number of requests (not throughput) plus storage; in DynamoDB you pay for provisioned throughput plus storage. Depending on your use case, one might be much cheaper than the other.
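The pricing difference above is easy to model as back-of-the-envelope arithmetic. Here is a rough sketch of the two billing styles; the prices are hypothetical placeholders I made up for illustration, not real Azure or AWS rates, so substitute the current numbers from each provider’s pricing page.

```python
# Rough cost-model sketch of the two pricing styles discussed above.
# All prices are HYPOTHETICAL placeholders, not real Azure/AWS rates.

def pay_per_request_cost(requests_per_month, gb_stored,
                         price_per_10k_requests, price_per_gb):
    """Azure-style billing: cost tracks actual request volume."""
    return (requests_per_month / 10_000) * price_per_10k_requests \
        + gb_stored * price_per_gb

def provisioned_cost(provisioned_rps, gb_stored,
                     price_per_rps_month, price_per_gb):
    """DynamoDB-style billing: cost tracks the peak capacity you
    reserve, even when actual traffic sits far below it."""
    return provisioned_rps * price_per_rps_month + gb_stored * price_per_gb

# Example: spiky workload averaging 100 req/s, provisioned for a 1 000 req/s peak.
avg_requests = 100 * 3600 * 24 * 30  # ~259M requests in a 30-day month

per_request = pay_per_request_cost(avg_requests, 500, 0.004, 0.02)
reserved = provisioned_cost(1_000, 500, 0.5, 0.25)
print(f"pay-per-request: ${per_request:.2f}, provisioned: ${reserved:.2f}")
```

With these made-up rates the spiky workload is much cheaper on pay-per-request billing, because you pay for average traffic instead of the reserved peak - which is exactly why the right choice depends on your traffic shape.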

Unique scale

Finally, there are times when you need to build a new solution and may even have a dedicated team for the challenge. Imagine you work at a big company: it has hundreds of thousands of services, each service is called many thousands of times per second, and each call generates logs. You want a solution that stores the logs for months and lets you search through them. The scale is unusual, and the number of calls is expected to grow 30% year over year, with 4x more traffic on some days. This time, you can probably start thinking about your own, new storage engine, tightly coupled to your needs.
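The growth figures above translate directly into a capacity plan: compound the 30% yearly growth, then apply the 4x peak-day multiplier. A minimal sketch, with a purely illustrative baseline of 5 million calls per second (the post doesn’t give a real number):

```python
# Back-of-the-envelope capacity projection for the log-storage scenario.
# The 5M calls/s baseline is an illustrative assumption, not a measurement.

def projected_peak_rps(current_rps, yearly_growth, years, peak_multiplier):
    """Apply compound yearly growth, then the peak-day multiplier."""
    return current_rps * (1 + yearly_growth) ** years * peak_multiplier

base = 5_000_000  # assumed current log-producing calls per second
for year in range(4):
    peak = projected_peak_rps(base, 0.30, year, 4)
    print(f"year {year}: plan for ~{peak / 1e6:.1f}M calls/s at peak")
```

Even this crude model shows why off-the-shelf limits stop being enough: a system sized for today’s peak is undersized within a year or two.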


TL;DR: I covered a few scalability levels; it turns out everyone should scale differently.