Friday, May 18, 2012

Amazon Elastic Load Balancing with Auto Scaling


Suppose you have a website, linuxhelp.in, and at times it receives more traffic than a single server can handle. What do you do? You create multiple instances of your system and configure a load balancer in front of them. But then you have to keep both instances running even when there is not much traffic. Here we will discuss how to eliminate this resource wastage using Amazon Elastic Load Balancing and Amazon Auto Scaling. To use this setup, your instances should be running in Amazon Elastic Compute Cloud (EC2).



In this example we will use the domain linuxhelp.in, and our test instances run CentOS Linux. You will need an Amazon Web Services account to follow along.

Objective:
Create a new instance from an AMI prepared beforehand when the CPU load rises above 80%, and distribute the load between the old and the newly created instance.
Terminate the extra instance when the CPU load falls below 40%, so that only one instance keeps running while the load is low.

Requirements:
Amazon Web Services account
Amazon Elastic Compute Cloud (EC2)
Amazon Elastic Load Balancing (ELB)
Amazon CloudWatch detailed monitoring
Amazon Auto Scaling
Amazon CloudWatch command line tools
Amazon Auto Scaling command line tools
Amazon ELB command line tools (see the environment setup sketch after this list)
DNS configuration access
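
The three command line tool sets listed above are Java based and are configured through a few environment variables. The snippet below is only a sketch: the install paths under /opt/aws are assumptions, so point them to wherever you actually unpacked the tools. Credentials are passed to each command with -C (certificate) and -K (private key), as in the examples that follow.

export JAVA_HOME=/usr/java/default
export AWS_ELB_HOME=/opt/aws/ElasticLoadBalancing
export AWS_AUTO_SCALING_HOME=/opt/aws/AutoScaling
export AWS_CLOUDWATCH_HOME=/opt/aws/CloudWatch
export PATH=$PATH:$AWS_ELB_HOME/bin:$AWS_AUTO_SCALING_HOME/bin:$AWS_CLOUDWATCH_HOME/bin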

Steps:
ELB:
Create an Amazon Elastic Load Balancer
Configure the health check

AutoScaling:
Create a launch configuration
Create an Auto Scaling group

CloudWatch:
Create a policy to scale up the instances.
Create an alarm on average CPUUtilization with threshold > 80 which triggers the scale-up policy.
Create a policy to scale down the instances.
Create an alarm on average CPUUtilization with threshold < 40 which triggers the scale-down policy.

DNS:
Point the domain to the ELB

Now we will see each step in detail.

Creating the Elastic Load Balancer:

First we need to create an Amazon Elastic Load Balancer. We can do this with the following command, in which we specify the listener port, the protocol and the availability zones.
elb-create-lb MyELB --headers --listener "lb-port=80,instance-port=80,protocol=http" --availability-zones us-east-1a,us-east-1d

Then we configure the health check that ELB uses to determine whether each instance is healthy.
elb-configure-healthcheck MyELB --target "HTTP:80/" --interval 60 --timeout 5 --unhealthy-threshold 3 --healthy-threshold 5

We can check the status of the Elastic Load Balancer as follows:
[root@Server ~]# elb-describe-lbs MyELB -C cert-id.pem  -K pk-id.pem
LOAD_BALANCER  MyELB  MyELB-2114043205.us-east-1.elb.amazonaws.com  2012-05-12T05:36:37.180Z

Autoscaling:

Creating the launch configuration:

The launch configuration tells AWS which AMI to use when spawning a new instance, along with the instance type, security group, key pair and so on.

[root@Server ~]# as-create-launch-config MyLaunchC --image-id ami-12345 --instance-type m1.small --group Basics --key cdnkey -C cert-id.pem -K pk-id.pem
OK-Created launch config
[root@Server ~]#
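
To confirm that the launch configuration was stored, the Auto Scaling tools also provide a describe command; something along these lines should list MyLaunchC:
[root@Server ~]# as-describe-launch-configs --headers -C cert-id.pem -K pk-id.pem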

Creating the Auto Scaling group:

We have to create an Auto Scaling group in which the instances will run. Here we specify the minimum and maximum number of instances, the availability zones, and the ELB to associate with.

[root@Server ~]# as-create-auto-scaling-group MyScalingGroup --launch-configuration MyLaunchC --availability-zones us-east-1a,us-east-1d  --min-size 1 --max-size 2 --load-balancers MyELB -C cert-id.pem -K pk-id.pem
OK-Created AutoScalingGroup
[root@Server ~]#

Describing the Auto Scaling group status:
We can check the status of the Auto Scaling group with the following command, which shows how many instances are running, their health and so on.
[root@Server ~]# as-describe-auto-scaling-groups --headers -C cert-id.pem -K pk-id.pem
AUTO-SCALING-GROUP  GROUP-NAME        LAUNCH-CONFIG  AVAILABILITY-ZONES     LOAD-BALANCERS  MIN-SIZE  MAX-SIZE  DESIRED-CAPACITY
AUTO-SCALING-GROUP  MyScalingGroup  MyLaunchC    us-east-1a,us-east-1d  MyELB    1         2         1      
INSTANCE  INSTANCE-ID  AVAILABILITY-ZONE  STATE      STATUS   LAUNCH-CONFIG
INSTANCE  i-8df54deb   us-east-1d         InService  Healthy  MyLaunchC
[root@Server ~]#
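
Because the Auto Scaling group registers its instances with MyELB, we can also check registration from the load balancer's side. As a rough sketch, the ELB tools provide an instance health command:
[root@Server ~]# elb-describe-instance-health MyELB -C cert-id.pem -K pk-id.pem
Instances should show up as InService once they pass the health check configured earlier.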

Creating the scale-up policy:

Here we create the policy, i.e. the action to be executed to increase the number of running instances (in response to the alarm raised when CPU utilization goes above 80%).
Executing the command returns a policy ARN, which we need to note down for the alarm.
[root@Server ~]# as-put-scaling-policy MyScaleUpPolicy --auto-scaling-group MyScalingGroup  --adjustment=1 --type ChangeInCapacity  --cooldown 300 -C cert-id.pem -K pk-id.pem
arn:aws:autoscaling:us-east-1:MyAwsID:scalingPolicy:arnid:autoScalingGroupName/MyScalingGroup:policyName/MyScaleUpPolicy
[root@Server ~]#

Creating the high CPU utilization alarm:

Here we create an alarm that fires when the average CPU utilization of the instances rises above 80%. In the command we specify the policy ARN (the policy that increases the number of instances by 1) to be invoked when the alarm fires.
[root@Server ~]# mon-put-metric-alarm MyHighCPUAlarm  --comparison-operator  GreaterThanThreshold  --evaluation-periods  3 --metric-name  CPUUtilization  --namespace  "AWS/EC2"  --period  60  --statistic Average --threshold  80 --alarm-actions arn:aws:autoscaling:us-east-1:MyAwsID:scalingPolicy:arnid:autoScalingGroupName/MyScalingGroup:policyName/MyScaleUpPolicy --dimensions "AutoScalingGroupName=MyScalingGroup" -C cert-id.pem  -K pk-id.pem
OK-Created Alarm
[root@Server ~]#

Creating the scale-down policy:

Here we create the policy to be executed to decrease the number of running instances (in response to the alarm raised when CPU utilization drops below 40%). Executing the command again returns a policy ARN, which we note down.
[root@Server ~]# as-put-scaling-policy MyScaleDownPolicy --auto-scaling-group MyScalingGroup  --adjustment=-1 --type ChangeInCapacity  --cooldown 300 -C cert-id.pem -K pk-id.pem
arn:aws:autoscaling:us-east-1:MyAwsID:scalingPolicy:arnid:autoScalingGroupName/MyScalingGroup:policyName/MyScaleDownPolicy
[root@Server ~]#
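
With both policies in place, we can list them (and their ARNs, in case they were not noted down) using the Auto Scaling tools; a sketch using the same group name:
[root@Server ~]# as-describe-policies --auto-scaling-group MyScalingGroup --headers -C cert-id.pem -K pk-id.pem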

Creating the low CPU utilization alarm:

Here we create an alarm that fires when the average CPU utilization of the instances falls below 40%. In the command we specify the policy ARN (the policy that decreases the number of instances by 1) to be invoked when the alarm fires.
[root@Server ~]# mon-put-metric-alarm MyLowCPUAlarm  --comparison-operator  LessThanThreshold --evaluation-periods  3 --metric-name  CPUUtilization --namespace  "AWS/EC2"  --period  60  --statistic Average --threshold  40  --alarm-actions arn:aws:autoscaling:us-east-1:MyAwsID:scalingPolicy:arnid:autoScalingGroupName/MyScalingGroup:policyName/MyScaleDownPolicy --dimensions "AutoScalingGroupName=MyScalingGroup" -C cert-id.pem  -K pk-id.pem
OK-Created Alarm
[root@Server ~]#
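
To verify the two alarms and see whether they are currently in OK or ALARM state, the CloudWatch tools provide a describe command; roughly:
[root@Server ~]# mon-describe-alarms -C cert-id.pem -K pk-id.pem
This lists every alarm in the account, including MyHighCPUAlarm and MyLowCPUAlarm created above.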

Testing:
First we will increase the CPU load and check whether a new instance is created when the CPU utilization stays above 80%.
To increase the CPU load, run the following command from multiple terminals:
#echo "9999999^999999" | bc
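
If opening several terminals is inconvenient, a rough alternative is to background a few copies of the same calculation from one shell; each bc run keeps roughly one CPU core busy (the redirect simply discards the huge result), so start about one per core:
echo "9999999^999999" | bc > /dev/null &
echo "9999999^999999" | bc > /dev/null &
Stop them afterwards with pkill bc.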

Check the CPU usage using the following command:
#top -c

Check whether new instances are spawned using the following command or from the AWS web interface.
#as-describe-auto-scaling-groups MyScalingGroup --headers -C cert-id.pem -K pk-id.pem
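
The scaling history itself can also be inspected; roughly, the Auto Scaling tools record each launch and termination as an activity:
#as-describe-scaling-activities --auto-scaling-group MyScalingGroup --headers -C cert-id.pem -K pk-id.pem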
Now kill all the processes started for increasing the load and check whether one of the instances is terminated.
Point the domain name to the public DNS name of the Elastic Load Balancer and check whether the site loads in the browser.
We have to create a CNAME record in the GoDaddy.com (or your registrar's) account for this. For example:

CNAME:
Create a CNAME record for "elb" that points to "MyELB-2114043205.us-east-1.elb.amazonaws.com".
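
Once the CNAME has propagated, a quick way to confirm it from the command line is dig (or host); for example, assuming the record was created on the elb subdomain as above:
#dig +short elb.linuxhelp.in CNAME
It should print the ELB hostname, and http://elb.linuxhelp.in should then serve the site through the load balancer.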


