Tuesday, January 27, 2015

Data Transfer Charges incurred on AWS

The following Data Transfers are FREE of charge:-
  • Data Transfer into AWS from the internet
  • Data Transfer between EC2 instances and S3 within the same Region
  • Data Transfer between an EC2 instance and EBS
  • Data Transfer between EC2 instances in the same Zone (within a Region)
  • Data Transfer between AWS Regions and CloudFront (the AWS CDN).
Data Transfers that result in a charge:-
  • Between Regions.
  • Between Zones within a Region at the rate of $0.01 per GB.
  • Out of AWS to the Internet.
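
As a quick worked example (the 500 GB figure is purely illustrative), moving 500 GB between Zones within a Region at the rate above works out to:

$echo "500 * 0.01" | bc
5.00

i.e. $5.00 for the transfer.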


Saturday, January 24, 2015

mysqladmin command that shows the number of active connections

$mysqladmin -u {user} -p extended-status | grep -wi 'threads_connected\|threads_running' | awk '{ print $2,$4 }'
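
Illustrative output (the counts are just an example):

Threads_connected 5
Threads_running 1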

Monday, January 19, 2015

s3cmd needs "s3:ListAllMyBuckets" action to be allowed during initial configuration

If we want to create a restrictive policy that allows only uploads to a bucket, then the below bucket policy will work (the Principal element is what makes this a bucket policy, attached to the bucket itself, rather than an IAM user policy)

{
  "Id": "Policy1421692784857",
  "Statement": [
    {
      "Sid": "Stmt1421692783042",
      "Action": [
          "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::root/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<account_name>:user/<bucket_name>"
        ]
      }
    }
  ]
}

You could use a policy similar to the above and use the aws cli to upload to the bucket. However, if you are using the s3cmd utility, the initial configuration requires the "s3:ListAllMyBuckets" action to be allowed for "s3cmd --configure" to work correctly. However, per the AWS documentation, "ListAllMyBuckets" cannot be specified for a particular bucket - "The following example user policy grants the s3:CreateBucket, s3:ListAllMyBuckets, and the s3:GetBucketLocation permissions to a user. Note that for all these permissions, you set the relative-id part of the Resource ARN to "*". For all other bucket actions, you must specify a bucket name. For more information, see Specifying Resources in a Policy."

Instead, your IAM policy should look like this:

{
  "Statement": [
    {
      "Sid": "Stmt1421700917471",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "Stmt1421701147331",
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}
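
With the above policy attached to the IAM user, both tools should work (a sketch; the file name is illustrative):

$s3cmd --configure
$s3cmd put myfile.txt s3://<bucket_name>/

or, with the aws cli:

$aws s3api put-object --bucket <bucket_name> --key myfile.txt --body myfile.txt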

Tuesday, January 13, 2015

Using AWS CLI to stop/start instances from the command line

Step 1:- 

Get the instance id's of the running instances

$aws ec2 describe-instances --filters Name=tag-key,Values="Name" --profile apix --output table --query "Reservations[*].Instances[*].InstanceId"

-------------------
|DescribeInstances|
+-----------------+
|  i-6e1b573e     |
|  ....           |
|  i-06ad8956     |
+-----------------+

The above will return the instance id(s)

Step 2:-

With the instance id's we can issue stop/start commands

$aws ec2 stop-instances --instance-ids <your instance id(s)> --profile <your profile>


{
    "StoppingInstances": [
        {
            "InstanceId": "<instance-id>",
            "CurrentState": {
                "Code": 80,
                "Name": "stopped"
            },
            "PreviousState": {
                "Code": 80,
                "Name": "running"
            }
        }
    ]
}
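
While the instance shuts down, its state can be polled until it reaches "stopped" (a sketch):

$aws ec2 describe-instances --instance-ids <instance-id> --query "Reservations[*].Instances[*].State.Name" --output text --profile <your profile>
stopping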

Step 3:-

We can start the instances using the start-instances command.

$aws ec2 start-instances --instance-ids <your instance id(s)> --profile <your profile>


{
    "StartingInstances": [
        {
            "InstanceId": "<instance-id>",
            "CurrentState": {
                "Code": 0,
                "Name": "pending"
            },
            "PreviousState": {
                "Code": 80,
                "Name": "stopped"
            }
        }
    ]
}
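
The two steps can also be combined into a single pipeline (a sketch; note that this stops every running instance visible to the profile, so use with care):

$aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[*].InstanceId" --output text --profile <your profile> | xargs aws ec2 stop-instances --profile <your profile> --instance-ids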



Saturday, January 10, 2015

Use gzip to achieve up to 80% compression on mysqldumps

Export with date:-

$mysqldump -u {user} -p {database} | gzip > `date -I`.{database}.sql.gz

Import from the export:-

$gzip -dc < `date -I`.{database}.sql.gz | mysql -u {user} -p {database}
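
For example, with a hypothetical user admin and database appdb, an export taken on 2015-01-10 would be written to 2015-01-10.appdb.sql.gz (date -I emits the date in YYYY-MM-DD form):

$mysqldump -u admin -p appdb | gzip > `date -I`.appdb.sql.gz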

Instances started from PV AMIs cannot be modified to new generation instance types

If you have long-running instances created from a PV AMI, then when you encounter an instance retirement, you cannot change the instance type to m3.* or c3.* because Amazon would like everyone to move to HVM-based images. For more information regarding PV and HVM virtualization, refer to the AWS documentation on Linux AMI virtualization types.

For now, if you have a PV virtualization instance, then you can only change the instance type to the first generation instance types in us-east-1:-

[Screenshot: list of previous generation instance types offered in the console]
Surprisingly, m1.large is missing from the above list, and the reason is that the AMI created from the running instance is 32-bit. In order to migrate to an m1.large, the original instance would first need to be migrated to a 64-bit image.
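
To check whether a given AMI is PV or HVM from the command line, the virtualization type can be queried directly (a sketch; substitute your own AMI id and profile). The query returns either "paravirtual" or "hvm":

$aws ec2 describe-images --image-ids <ami_id> --query "Images[*].VirtualizationType" --output text --profile <your profile>
paravirtual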

Wednesday, January 7, 2015

At the moment Route 53 does not support querying the last update timestamp of a DNS record

With some DNS servers, we are able to query the last update timestamp of the DNS records using the SOA "serial" attribute. For example:-

*************
C:\>nslookup -q=SOA example.com
Server:  homeportal
Address:  192.x.x.x

Non-authoritative answer:
example.com
        primary name server = ns1.example.com
        responsible mail addr = hostmaster.ns1.example.com
        serial  = 2015010703
        refresh = 10800 (3 hours)
        retry   = 3600 (1 hour)
        expire  = 604800 (7 days)
        default TTL = 600 (10 mins)
**************

The above serial refers to the date and the number of updates done on that particular date (2015/01/07 and update # 3)

The above query does not work for a domain hosted on Amazon Route 53; the returned serial # is always "1". Also, currently, Route 53 does not support CloudTrail, so API calls to Route 53 do not get recorded. So the only way to get the last DNS record update timestamp is to log a ticket with the AWS Route 53 team.
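
For example, the same SOA query against a Route 53 hosted zone (illustrative; the domain is hypothetical, and Route 53's default SOA record carries a constant serial of 1) looks like:

$dig +short SOA example-on-route53.com
ns-2048.awsdns-64.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400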

Monday, January 5, 2015

httpd service restart error: (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80

Recently, on one of the EC2 instances, there was an httpd restart problem, where the below error was thrown

***********
$ sudo service httpd restart
Stopping httpd:                                            [FAILED]
Starting httpd: (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
***********

After checking folder/file level permissions, I followed the suggestions made in one blog post. However, that did not help and I could not kill the httpd process. Later, I suspected that the httpd.pid file was not writable. It turned out that the file did not exist at all (that is another investigation altogether). I created two files in the <apache_home>/logs folder, writing the PID of the running httpd process into httpd.pid:

$touch httpd
$vi httpd.pid
<pid>

Once I had done that, I was able to stop and start the httpd process again.
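
For reference, the PID of the running httpd process (and whatever is holding port 80) can be found with standard tools (a sketch, assuming a typical Linux install):

$pgrep httpd
$sudo netstat -tlnp | grep ':80'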

Use max_input_vars in php.ini of your PHP application to mitigate DoS attacks

This particular property is commented out by default in the php.ini file. It can be a useful variable to limit the number of GET, POST and cookie input variables, thereby restricting the size of the post data the application will accept. For more information, please refer to the PHP docs.

********php.ini*********
; How many GET/POST/COOKIE input variables may be accepted
; max_input_vars = 1000
max_input_vars = 4000
**********************
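
After editing php.ini and restarting the web server, the effective value can be verified from the command line (illustrative output; the format is "local value => master value"):

$php -i | grep max_input_vars
max_input_vars => 4000 => 4000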