scalability for dummies

I think scalability is the sexiest feature of cloud computing! You don't own a single computer, and at the same time you have millions of computers your application can scale across! For startups a cloud platform is a must, since you cannot buy many servers at the beginning of your company. In short, scalability is the cheapest way to make your application reach lots of people. Before you read further, keep in mind that in this article I will introduce you to horizontal scalability: adding and removing instances as load changes.

Let's look at the main pros of a cloud platform:

  • You can scale your application in minutes
  • It saves you a lot of installation and configuration work; for example, you don't need to know how to install and configure a load balancer
  • You pay as you go, so you don't need a big budget to buy lots of computers up front
  • It gives you many options for scaling
  • It increases your application's availability
  • It's all automatic: scaling up when needed and down when not
  • It replaces instances automatically if there are problems with an instance
  • The platform lets you increase your application's performance with very fast networking
what is scalability

Wikipedia: "Capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged in order to accommodate that growth."

Example: Say you build an application like Twitter that lets users send 140-character messages. It's super easy, right? You develop the app and release it, and it runs on a powerful computer with 8 gigabytes of RAM and 4 CPUs. Sounds good: thousands of people can use your application without any problem (assuming there are no power or internet cuts, hardware failures, etc.). But what will you do when your system is used by millions? You need to hire a system admin and add more computers, so the admin can run the application across multiple machines. This is scalability.

AWS, the leading cloud computing platform, lets you create new instances of your application and scale them in seconds. In the Twitter example, AWS takes the place of both the new computers and the system admin. You don't need to hire a system admin for maintenance and you don't need to buy new computers for life; you can rent computers whenever you need extra capacity.

why scaling?

In 2014 one of our customers asked us to build a highly scalable news website. We designed the application's architecture, chose AWS as the cloud platform, and finished the project in 8 weeks. If you want your application to be highly scalable, you need to respond to all requests 24/7, and that was the biggest challenge: if you need to be reachable all the time, you need scaling. We met this requirement easily with AWS. A startup rarely has the budget to buy new computers every time it wants to reach more people, so you have to use cloud computing to get scalability. In the end our customer decided not to invest in their own servers and let us use AWS to scale the application.

which cloud?

I personally use AWS for my POCs and personal projects. I first used AWS in 2011 on a social media project, uploading users' avatars to S3, and after that I started to use other AWS features. I like AWS's features, pricing, SDKs and UI very much. The SDK is easy to use in code (I tried the Java SDK) and the UI is so clear that you don't waste time finding what you're looking for. I will introduce AWS in another article. Let's dive into scalability.

AWS Elastic Beanstalk's scalability

AWS lets you configure scalability through different scaling triggers, so you can find which parameter matters most for your application and set up scaling easily. For example, on one project I ran a load test against the application and found that once the server received more than 2,000 requests per second it started to lose performance. That gave me my clue, and I set the scaling trigger to add 1 instance if the server receives more than 2,000 × 60 = 120,000 requests in 1 minute. Here is an AWS white paper about infrastructure at scale; it's a very good reference.
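
To make that concrete, here is a minimal sketch of what that trigger could look like if you script it with boto3 (Python) instead of the console. I'm assuming the console fields map to Elastic Beanstalk's aws:autoscaling:trigger options; the environment name is hypothetical, and each parameter is explained in the next section.

    import boto3

    eb = boto3.client("elasticbeanstalk")
    NS = "aws:autoscaling:trigger"  # Elastic Beanstalk's auto scaling trigger namespace

    # Add 1 instance when the environment receives more than 120,000 requests in 1 minute.
    settings = [
        {"Namespace": NS, "OptionName": "MeasureName", "Value": "RequestCount"},
        {"Namespace": NS, "OptionName": "Statistic", "Value": "Sum"},
        {"Namespace": NS, "OptionName": "Unit", "Value": "Count"},
        {"Namespace": NS, "OptionName": "Period", "Value": "1"},          # measure every 1 minute
        {"Namespace": NS, "OptionName": "BreachDuration", "Value": "1"},  # breach for 1 minute before firing
        {"Namespace": NS, "OptionName": "UpperThreshold", "Value": "120000"},
        {"Namespace": NS, "OptionName": "UpperBreachScaleIncrement", "Value": "1"},
    ]

    # "news-site-env" is a hypothetical environment name; replace it with your own.
    eb.update_environment(EnvironmentName="news-site-env", OptionSettings=settings)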

scalability parameters

Here are the AWS Elastic Beanstalk scalability parameters; if you have questions about any of them, feel free to ask me. A small sketch showing how to read these settings back from an environment follows the list.

  • Trigger Measure: The metric the trigger watches: CPUUtilization, NetworkIn, NetworkOut, DiskWriteOps, DiskReadBytes, DiskReadOps, DiskWriteBytes, Latency, RequestCount, HealthyHostCount, UnhealthyHostCount
  • Trigger Statistics: Minimum, Maximum, Sum, Average
  • Unit of Measurement: Seconds, Percent, Bytes, Bits, Count, Bytes/Second, Bits/Second, Count/Second, None
  • Measurement Period: Specifies how frequently Amazon CloudWatch measures the metrics for your trigger.
  • Breach Duration: The amount of time a metric can be beyond its defined limit (the Upper Threshold or Lower Threshold) before the trigger fires. Valid values: 1 to 600.
  • Upper Threshold: If the measurement is higher than this number for the breach duration, a trigger is fired. Valid values: 0 to 20000000.
  • Upper Breach Scale: How many Amazon EC2 instances to add when performing a scaling activity.
  • Lower Threshold: If the measurement falls below this number for the breach duration, a trigger is fired. Valid values: 0 to 20000000.
  • Lower Breach Scale: How many Amazon EC2 instances to remove when performing a scaling activity.
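
These console fields correspond, as far as I know, to the options in the aws:autoscaling:trigger namespace (MeasureName, Statistic, Unit, Period, BreachDuration, UpperThreshold, UpperBreachScaleIncrement, LowerThreshold, LowerBreachScaleIncrement). As a quick sanity check you can read back what your environment currently uses; here is a small boto3 (Python) sketch, with hypothetical application and environment names.

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # "news-site" and "news-site-env" are hypothetical names; replace them with your own.
    response = eb.describe_configuration_settings(
        ApplicationName="news-site",
        EnvironmentName="news-site-env",
    )

    # Print only the auto scaling trigger options (the parameters listed above).
    for config in response["ConfigurationSettings"]:
        for option in config["OptionSettings"]:
            if option["Namespace"] == "aws:autoscaling:trigger":
                print(option["OptionName"], "=", option.get("Value"))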

You can play with these parameters to tune your scaling and reach your high-availability goals. Let's walk through some example configurations.

  • Example 1 - Scaling on CPU utilization: This is the most common use case if your application runs heavy algorithms or does a lot of computation. Let's say your requirement is to scale up by 2 instances if average CPU utilization over a minute is above 70%, and to scale down by 2 instances if it drops below 20%. First choose CPUUtilization as the Trigger Measure and set Trigger Statistics to Average, then set Measurement Period and Breach Duration to 1. Now Amazon CloudWatch knows to measure your app's CPU utilization. For the last step, set Upper Threshold to 70, Upper Breach Scale to 2, Lower Threshold to 20 and Lower Breach Scale to -2. That's all, the configuration is ready (see the first sketch after this list).
    [Screenshot: Scaling AWS CPU Utilization]

  • Example 2 - Scaling on request count: This configuration is useful if your app serves a lot of requests. Let's assume your app's performance degrades above 11,000 requests per minute, so you want to add an instance after 10,000 requests in a minute and remove an instance if the count falls below 1,000. It's easy to set this up with AWS. Set Trigger Measure to RequestCount, Trigger Statistics to Sum and Unit of Measurement to Count so that Amazon CloudWatch measures your app's incoming requests. Then set Upper Threshold to 10,000 and Upper Breach Scale to 1, Lower Threshold to 1,000 and Lower Breach Scale to -1. Elastic Load Balancing's request count is what gets watched to decide whether to add or remove an instance of your application (see the second sketch after this list).
    [Screenshot: Scaling AWS Request Count]

  • Example 3 - Scaling on network traffic: It's also easy to configure your app's scaling based on network-out performance. For example, you can watch your app's network metrics in CloudWatch to work out sensible scale-up and scale-down points; in the example metrics here, average network out stays under 1 MB.
    [Image: Example of metrics]
    Let's assume you want to add an instance when network out rises above 8 MB per second and remove an instance when it drops below 0.5 MB per second. You would set the configuration as shown below (see the third sketch after this list).
    [Screenshot: Scaling AWS Network Out]
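
Here is a minimal boto3 (Python) sketch of Example 1, under the same assumption as before that the console fields map to the aws:autoscaling:trigger options; "my-env" is a hypothetical environment name.

    import boto3

    eb = boto3.client("elasticbeanstalk")
    NS = "aws:autoscaling:trigger"

    # Example 1: +2 instances above 70% average CPU, -2 instances below 20%.
    cpu_settings = [
        {"Namespace": NS, "OptionName": "MeasureName", "Value": "CPUUtilization"},
        {"Namespace": NS, "OptionName": "Statistic", "Value": "Average"},
        {"Namespace": NS, "OptionName": "Unit", "Value": "Percent"},
        {"Namespace": NS, "OptionName": "Period", "Value": "1"},          # measure every minute
        {"Namespace": NS, "OptionName": "BreachDuration", "Value": "1"},
        {"Namespace": NS, "OptionName": "UpperThreshold", "Value": "70"},
        {"Namespace": NS, "OptionName": "UpperBreachScaleIncrement", "Value": "2"},
        {"Namespace": NS, "OptionName": "LowerThreshold", "Value": "20"},
        {"Namespace": NS, "OptionName": "LowerBreachScaleIncrement", "Value": "-2"},
    ]

    eb.update_environment(EnvironmentName="my-env", OptionSettings=cpu_settings)  # hypothetical env name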
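
And a second sketch for Example 2 (request count), with the same assumptions and the same hypothetical environment name.

    import boto3

    eb = boto3.client("elasticbeanstalk")
    NS = "aws:autoscaling:trigger"

    # Example 2: +1 instance above 10,000 requests per minute, -1 instance below 1,000.
    request_settings = [
        {"Namespace": NS, "OptionName": "MeasureName", "Value": "RequestCount"},
        {"Namespace": NS, "OptionName": "Statistic", "Value": "Sum"},
        {"Namespace": NS, "OptionName": "Unit", "Value": "Count"},
        {"Namespace": NS, "OptionName": "Period", "Value": "1"},
        {"Namespace": NS, "OptionName": "BreachDuration", "Value": "1"},
        {"Namespace": NS, "OptionName": "UpperThreshold", "Value": "10000"},
        {"Namespace": NS, "OptionName": "UpperBreachScaleIncrement", "Value": "1"},
        {"Namespace": NS, "OptionName": "LowerThreshold", "Value": "1000"},
        {"Namespace": NS, "OptionName": "LowerBreachScaleIncrement", "Value": "-1"},
    ]

    eb.update_environment(EnvironmentName="my-env", OptionSettings=request_settings)  # hypothetical env name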
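
Finally, a sketch for Example 3 (network out). Note that I'm assuming the thresholds are expressed in bytes here (roughly 8 MB and 0.5 MB); check how NetworkOut is reported for your measurement period and adjust the unit and numbers accordingly.

    import boto3

    eb = boto3.client("elasticbeanstalk")
    NS = "aws:autoscaling:trigger"

    # Example 3: +1 instance when average NetworkOut exceeds ~8 MB, -1 instance below ~0.5 MB.
    # Assumption: thresholds given in bytes; adjust to how the metric is reported for your period.
    network_settings = [
        {"Namespace": NS, "OptionName": "MeasureName", "Value": "NetworkOut"},
        {"Namespace": NS, "OptionName": "Statistic", "Value": "Average"},
        {"Namespace": NS, "OptionName": "Unit", "Value": "Bytes"},
        {"Namespace": NS, "OptionName": "Period", "Value": "1"},
        {"Namespace": NS, "OptionName": "BreachDuration", "Value": "1"},
        {"Namespace": NS, "OptionName": "UpperThreshold", "Value": "8000000"},   # ~8 MB
        {"Namespace": NS, "OptionName": "UpperBreachScaleIncrement", "Value": "1"},
        {"Namespace": NS, "OptionName": "LowerThreshold", "Value": "500000"},    # ~0.5 MB
        {"Namespace": NS, "OptionName": "LowerBreachScaleIncrement", "Value": "-1"},
    ]

    eb.update_environment(EnvironmentName="my-env", OptionSettings=network_settings)  # hypothetical env name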

Overall, even if you haven't used scalability yet, exploring these configurations makes you start asking yourself new questions about the architecture of your apps and will encourage you to try new things. I strongly recommend building a simple application and trying scalability for yourself. You may find more details here, and feel free to comment!


TAGGED IN work, aws, cloud computing