Monday, December 29, 2014

ISO 8601 week numbering needs the correct date format to be specified

Since January 1, 2015 falls on a Thursday, programs that use ISO 8601 week-based year numbering treat the week beginning Monday, December 29, 2014 as week 1 of 2015. So in week-based terms, the year 2015 actually starts today, i.e., Monday, Dec 29th, 2014. You can use the below link for epoch conversion:-

epoch converter

You can see the difference between the two formats when you run the "date" command on Linux machines:-

$ date
Mon Dec 29 20:14:41 EST 2014

$ date -u "+%Y"
2014

$ date -u "+%G"
2015

You can see the same behavior with the below Java snippet:

..
Format f = new SimpleDateFormat("yyyy");
System.out.println("year : " + f.format(new Date()));
..

returns "2014", whereas

...
Format f = new SimpleDateFormat("YYYY");
System.out.println("year : " + f.format(new Date()));
...

returns "2015". So we have to check our scripts and programs to make sure we are using the intended format string for dates (i.e., "yyyy"/%Y for the calendar year, and not "YYYY"/%G unless we really want the ISO week-based year).
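As a quick sanity check, GNU date can print both year formats (plus the ISO week number %V) for an explicit date; note that the -d switch is GNU-specific:

$ date -d "2014-12-29" "+%Y %G %V"
2014 2015 01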

Sunday, December 28, 2014

Elementary OS ppa repository for ubuntu 12.04 (precise)

Add the lines below to your Ubuntu 12.04 system's software repositories, i.e., /etc/apt/sources.list. Once added, you will have to run "apt-get update" and then proceed with "apt-get install elementary-desktop".

deb http://ppa.launchpad.net/elementary-os/stable/ubuntu precise main 
deb-src http://ppa.launchpad.net/elementary-os/stable/ubuntu precise main 
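For example, a typical install sequence after adding the lines (alternatively, the PPA can usually be added with add-apt-repository from the python-software-properties package):

$sudo apt-get update
$sudo apt-get install elementary-desktop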

Thursday, December 18, 2014

Amazon SES service interruptions in US-EAST at around 8:42PM PST

We started noticing SMTP errors in some of the applications that use SES to send outbound messages, with the stack trace below:-


javax.mail.AuthenticationFailedException: 454 Temporary authentication failure
        at com.sun.mail.smtp.SMTPTransport$Authenticator.authenticate(SMTPTransport.java:892)
        at com.sun.mail.smtp.SMTPTransport.authenticate(SMTPTransport.java:814)
        at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:728)
        at javax.mail.Service.connect(Service.java:386)
        at javax.mail.Service.connect(Service.java:245)
        at javax.mail.Service.connect(Service.java:194)
        at javax.mail.Transport.send0(Transport.java:253)
        at javax.mail.Transport.send(Transport.java:124)


The timestamps of these errors correspond to service interruptions noted on the Amazon status dashboard page:-
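As a quick check from an affected host, you can confirm that the SES SMTP endpoint is reachable and negotiating STARTTLS (email-smtp.us-east-1.amazonaws.com:587 is the US-EAST endpoint; this only tests connectivity, not authentication):

$ openssl s_client -starttls smtp -crlf -connect email-smtp.us-east-1.amazonaws.com:587 < /dev/null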





Wednesday, December 10, 2014

Some implementations of TLS may be susceptible to POODLE vulnerability

There is a new advisory that some implementations are susceptible to the POODLE attack with a TLS downgrade to SSL:-

https://www.us-cert.gov/ncas/alerts/TA14-290A

Load balancers created after 10/14/2014 5:00 PM PDT disable the SSLv3 protocol by default (they will not allow TLS to fall back to SSLv3, and are therefore not vulnerable to POODLE). For load balancers created before this date, you can manually change the protocols in use: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/configure-ssl-ciphers.html

 You'll just need to make sure your load balancer only supports TLS protocols. The easiest way to do this is to use the predefined policy "ELBSecurityPolicy-2014-10".
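A sketch of doing the same from the AWS CLI, assuming a classic ELB with an HTTPS listener on port 443 (the load balancer and policy names below are placeholders):

$aws elb create-load-balancer-policy --load-balancer-name <elb-name> --policy-name <ssl-policy-name> --policy-type-name SSLNegotiationPolicyType --policy-attributes AttributeName=Reference-Security-Policy,AttributeValue=ELBSecurityPolicy-2014-10
$aws elb set-load-balancer-policies-of-listener --load-balancer-name <elb-name> --load-balancer-port 443 --policy-names <ssl-policy-name>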

Monday, November 24, 2014

Managing MFA tokens for AWS console logins

As per security best practice recommendations, it would be best to turn on multi-factor authentication for console logins for all IAM users. However, there is some operational overhead when IAM users upgrade their virtual MFA devices (iPhone, Android, etc.). The recommended option is for the root user to follow the below steps:-

****************

  • Deactivate MFA from the user(s) account.  If the user did not deactivate MFA prior to getting a new phone the root AWS account can do this: http://docs.aws.amazon.com/IAM/latest/UserGuide/DeactivateMFA.html
  • Remove the previous MFA token from the virtual MFA application.
  • Re-activate MFA on a user(s) account: http://docs.aws.amazon.com/IAM/latest/UserGuide/GenerateMFAConfigAccount.html


****************

Though not recommended, there is a way to carry the token over from the old device to the new one. On an iPhone, for example, we can back up the entire contents of the old phone to iTunes or iCloud and then perform a restore from that backup on the new phone. This will also restore the MFA tokens for apps like Google Authenticator (as of the date the backup was made).


Monday, November 17, 2014

Turning off PHP execution in certain Apache httpd directories to prevent remote execution

If you have a PHP site hosted on Apache httpd, you can disable PHP execution in specific directories (such as upload directories) by setting the below directive in your httpd.conf and restarting httpd.

***********
<Directory "/var/www/upload">
php_flag engine off
</Directory>
***********
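After the change, you can validate the configuration, restart, and confirm that a PHP file under the upload directory is no longer executed (the host name and test file below are placeholders):

***********
$ sudo apachectl configtest
$ sudo service httpd restart
$ curl -sI http://<your-host>/upload/test.php
***********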

Saturday, November 15, 2014

Additional new services announced on second day of AWS re:Invent 2014 keynote by Werner Vogels


  • Amazon EC2 Container Service
    Amazon EC2 Container Service is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run distributed applications on a managed cluster of Amazon EC2 instances. Amazon EC2 Container Service lets you launch and stop container-enabled applications with simple API calls, allows you to query the state of your cluster from a centralized service, and gives you access to many familiar Amazon EC2 features like security groups, EBS volumes and IAM roles. 
    Learn more about Amazon EC2 Container Service » 
  • AWS Lambda 
    AWS Lambda is a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information. You can also use AWS Lambda to create new back-end services where compute resources are automatically triggered based on custom requests. AWS Lambda starts running your code within milliseconds of an event and scales automatically from a few requests per day to thousands per second. 
    Learn more about AWS Lambda » 
  • Coming Soon: C4 instances 
    C4 instances represent the next generation of Amazon EC2 Compute-optimized instances. C4 instances are based on Intel Xeon E5-2666 v3 (Haswell) processors that run at a high clock speed of 2.9 GHz, and are designed to deliver the highest level of processor performance on EC2. C4 instances are ideal for running applications, gaming and web servers, transcoding, and high performance computing workloads. 
    Read the C4 instances blog post » 
  • Coming Soon: Larger, Faster EBS Volumes 
    We will be increasing the performance and maximum size of General Purpose (SSD) and Provisioned IOPS (SSD) volumes. You will be able to create volumes of up to 16 TB and 10,000 IOPS for Amazon EBS General Purpose (SSD) volumes and up to 16 TB and 20,000 IOPS for Amazon EBS Provisioned IOPS (SSD) volumes. General Purpose (SSD) volumes will deliver a maximum throughput of 160 MBps and Provisioned IOPS (SSD) volumes will deliver 320 MBps, when attached to EBS optimized instances. 
    Read the Amazon EBS blog post »
  • Amazon S3 event notification 
    Amazon S3 can now send event notifications when objects are uploaded to Amazon S3. Notification messages can be sent through either Amazon SNS or Amazon SQS, or trigger AWS Lambda functions. 
    Learn more about Amazon S3 event notifications » 
  • Amazon DynamoDB Streams 
    Amazon DynamoDB Streams provides a time ordered sequence of item level changes in any DynamoDB table. The changes are de-duplicated and stored for 24 hours. This capability enables you to extend the power of DynamoDB with cross-region replication, continuous analytics with Redshift integration, trigger AWS Lambda functions, and many other scenarios. 
    Learn more about Amazon DynamoDB Streams » 

Wednesday, November 12, 2014

New Services announced by Amazon in today's keynote at re:Invent 2014


  • Amazon RDS for Aurora
    Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides up to five times better performance than MySQL at a price point one tenth that of a commercial database while delivering similar performance and availability.
    Learn more about Amazon RDS for Aurora » 
  • AWS CodeDeploy 
    AWS CodeDeploy is a service that automates code deployments to Amazon EC2 instances. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one EC2 instance or thousands. 
    Learn more about AWS CodeDeploy » 
  • AWS Key Management Service 
    AWS Key Management Service (KMS) is a managed service that makes it easy to create and control the keys used to encrypt data. KMS is integrated with other AWS services including Amazon EBS, Amazon S3, and Amazon Redshift, making it simple to encrypt your data with encryption keys that you manage and providing you an audit trail through AWS CloudTrail. 
    Learn more about AWS Key Management Service » 
  • AWS Config 
    AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config you can discover existing AWS resources, export a complete inventory of your AWS resources with all configuration details, and determine how a resource was configured at any point in time. These capabilities enable compliance auditing, security analysis, resource change tracking, and troubleshooting. 
    Learn more about AWS Config »
  • AWS CodeCommit 
    AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. CodeCommit eliminates the need for you to operate your own source control system or worry about scaling its infrastructure. You can use CodeCommit to store anything from code to binaries, and it supports the standard functionality of Git allowing it to work seamlessly with your existing Git-based tools. Your team can also use CodeCommit’s online code tools to browse, edit, and collaborate on projects. 
    Learn more about AWS CodeCommit » 
  • AWS CodePipeline 
    AWS CodePipeline is a continuous delivery and release automation service that aids smooth deployments. You can design your development workflow for checking in code, building the code, deploying your application into staging, testing it, and releasing it to production. You can integrate 3rd party tools into any step of your release process or you can use CodePipeline as an end-to-end solution. CodePipeline enables you to rapidly deliver features and updates with high quality through the automation of your build, test, and release process.
    Learn more about AWS CodePipeline » 

Thursday, October 30, 2014

No SSD drives in us-east-1b, looks like a constrained zone issue again

For instances started in us-east-1b, we cannot choose SSD volumes for disk:


Surprising that m3 instance type is not available in a constrained zone us-east-1b

In a region, there can be a constrained zone (which can vary per AWS account) where some of the instance types are not available. In this particular case, us-east-1b does not even have m3 type instances:-


Looks like m1.xlarge instance types are no-longer available in us-east-1c zone

It appears that Amazon has started phasing out some of the first generation machines from some of the zones in the older regions, like us-east-1c. If you try to launch a first generation machine like m1.xlarge, you will see a launch error like the one below:-


Monday, October 27, 2014

Creating an IAM user policy to restrict user to add only ingress ports

The AWS CLI is not the most intuitive in terms of the mandatory switches it needs. To make matters worse, the IAM policy simulator is rudimentary in terms of functionally checking the policies. Recently, I had to enable a user to add ingress rules to a particular security group in a VPC; the user policy looked like

*************
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AuthorizeSecurityGroupIngress"
      ],
      "Resource": "arn:aws:ec2:<region>:<aws_account>:security-group/<sg-group-name>",
      "Condition": {
        "StringEquals": {
          "ec2:Vpc": "arn:aws:ec2:<region>:<aws_account>:vpc/<vpc-id>"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeSecurityGroups",
      "Resource": "*"
    }
  ]
}
*************

The above policy passed in the IAM policy simulator just fine. However, during runtime, using the below command

*************
$aws ec2 authorize-security-group-ingress --group-name <sg-group-name> --protocol tcp --port 80 --cidr <block ip> --profile <profile-name>

A client error (InvalidGroup.NotFound) occurred when calling the AuthorizeSecurityGroupIngress operation: The security group '<sg-group-name>' does not exist in default VPC '<default-vpc-id>'
*************

Since the user policy was created with the resource ARN pointing to <sg-group-name>, the above was expected to work, but it does not. As per the AWS docs, the security group id must be given instead of the name.

So the above policy had to be modified to include security group id:

*************
....
"Resource": "arn:aws:ec2:<region>:<aws_account>:security-group/<sg-group-id>",
....
*************

Correspondingly, the aws cli command needs to pass security group id as well:

*************
$aws ec2 authorize-security-group-ingress --group-id <sg-group-id> --protocol tcp --port 80 --cidr <block ip> --profile <profile-name>
{
    "return": "true"
}
*************
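If only the group name is known, the corresponding group id can be looked up first (DescribeSecurityGroups is allowed on all resources by the second statement in the policy above):

*************
$aws ec2 describe-security-groups --filters Name=group-name,Values=<sg-group-name> --query "SecurityGroups[*].GroupId" --output text --profile <profile-name>
*************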

Had to spend some time troubleshooting this subtle difference between group id and group name; the policy simulator or the AWS CLI could save folks some time if this difference were made more obvious.

Monday, October 20, 2014

Disable Apache and PHP signature on external facing websites

For security reasons, you will want to disable the Apache and PHP signatures (versions) on external facing sites. When the signatures are not hidden, you will see headers like

*************
HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Type: text/html; charset=utf-8
Date: Mon, 20 Oct 2014 19:35:43 GMT
Pragma: no-cache
Server: Apache/2.4.9 (Unix) PHP/5.5.11
X-Powered-By: PHP/5.5.11
*************

To hide the signatures, you can make the following changes in your apache's httpd.conf and php.ini file

httpd.conf

**************
ServerSignature Off
ServerTokens Prod
TraceEnable Off

**************

php.ini

**************
expose_php = Off
**************

After the above changes, you will have to restart your httpd server. 
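To confirm the signatures are hidden, you can re-check the response headers (the host name below is a placeholder); with the above settings the Server header should show just "Apache" and the X-Powered-By header should disappear:

*************
$ curl -sI http://<your-host>/ | egrep -i "^(Server|X-Powered-By)"
Server: Apache
*************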

Thursday, October 16, 2014

SSLv3 POODLE attack (CVE-2014-3566) and OpenSSL memory leak DoS (CVE-2014-3513)

The new SSLv3 vulnerability found by Google researchers:-

http://googleonlinesecurity.blogspot.com/2014/10/this-poodle-bites-exploiting-ssl-30.html

has widespread implications for servers that are SSL termination points. For now, they have to disable SSLv3 ciphers or the protocol itself. Additionally, reverse proxies should be checked to see if only strong ciphers (TLS) can be allowed.

Amazon has also released patches for Amazon Linux and instructions for ELB:-

http://aws.amazon.com/security/security-bulletins/CVE-2014-3566-advisory/

https://alas.aws.amazon.com/index.html

****************
Amazon Linux AMI:
The Amazon Linux AMI repositories now include patches for POODLE (CVE-2014-3566) as well as for the additional OpenSSL issues (CVE-2014-3513, CVE-2014-3568, CVE-2014-3567) that were released on 2014-10-15. Please see https://alas.aws.amazon.com/ALAS-2014-426.html and https://alas.aws.amazon.com/ALAS-2014-427.html for additional information.

Amazon Elastic Load Balancing:
All load balancers created after 10/14/2014 5:00 PM PDT will use a new SSL Negotiation Policy that will by default no longer enable SSLv3.
Customers that require SSLv3 can reenable it by selecting the 2014-01 SSL Negotiation Policy or manually configuring the SSL ciphers and protocols used by the load balancer. For existing load balancers, please follow the steps below to disable SSLv3 via the ELB Management
Console:
    1. Select your load balancer (EC2 > Load Balancers).
    2. In the Listeners tab, click "Change" in the Cipher column.
    3. Ensure that the radio button for "Predefined Security Policy" is selected
    4. In the dropdown, select the "ELBSecurityPolicy-2014-10" policy.
    5. Click "Save" to apply the settings to the listener.
    6. Repeat these steps for each listener that is using HTTPS or SSL for each load balancer.

****************************
You can also change the cipher suite policy in the ELB configuration by selecting the version "ELBSecurityPolicy-2014-10" below:-
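To verify that an endpoint no longer accepts SSLv3, you can force an SSLv3-only handshake with openssl; once the new policy is applied, the handshake should fail (the DNS name below is a placeholder):

****************
$ openssl s_client -connect <elb-dns-name>:443 -ssl3 < /dev/null
****************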

Wednesday, October 8, 2014

GCE vs AWS Instance Pricing Charts

Straight out of RightScale blog link:-

On-Demand pricing comparison:-


1 yr Sustained use vs Reserved Instance pricing comparison:-


3 yr Sustained use vs Reserved Instance pricing comparison:-







Wednesday, October 1, 2014

Patching bash on old releases such as CentOS 5

You should be able to use the package manager to update the bash version. In some cases, if you want a specific version of bash such as bash-3.2-33.el5_11.4, you can download and install it using rpm

$wget http://mirror.centos.org/centos/5/updates/x86_64/RPMS/bash-3.2-33.el5_11.4.x86_64.rpm
$sudo rpm -ivh --force bash-3.2-33.el5_11.4.x86_64.rpm
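After installing, confirm the new version is in place and re-run the quick vulnerability test; a patched bash should print only "test":

$rpm -q bash
$env 'x=() { :;}; echo vulnerable' bash -c "echo test"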

Shellshock: Take two!

The script mentioned in the Red Hat "diagnostic steps" is attached below and can be downloaded from the Red Hat Labs access page (login required):-

*****************
#!/bin/bash
# shellshock-test.sh

VULNERABLE=false;
CVE20146271="$(env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test" 2>&1 )"

CVE20147169=$(cd /tmp 2>&1; rm -f /tmp/echo 2>&1; env 'x=() { (a)=>\' bash -c "echo uname" 2>&1; cat /tmp/echo 2>&1; rm -f /tmp/echo 2>&1 )

if [[ "$CVE20146271" =~ "vulnerable" ]]
then
    echo "This system is vulnerable to CVE-2014-6271 <https://access.redhat.com/security/cve/CVE-2014-6271>"
    VULNERABLE=true;
elif [[ "$CVE20146271" =~ "bash: error importing function definition for 'x'" ]]
then
    echo "This system does not have the most up to date fix for CVE-2014-6271 <https://access.redhat.com/security/cve/CVE-2014-6271>.  Please refer to 'https://access.redhat.com/articles/1200223' for more information"
else
    echo "This system is safe from CVE-2014-6271 <https://access.redhat.com/security/cve/CVE-2014-6271>"
fi

if [[ "$CVE20147169" =~ "Linux" ]]
then
    echo "This system is vulnerable to CVE-2014-7169 <https://access.redhat.com/security/cve/CVE-2014-7169>"
    VULNERABLE=true;
else
    echo "This system is safe from CVE-2014-7169 <https://access.redhat.com/security/cve/CVE-2014-7169>"
fi

if [[ "$VULNERABLE" = true ]]
then
    echo "Please run 'yum update bash'.  If you are using satellite or custom repos you need to update the channel with the latest bash version first before running 'yum update bash'.  Please refer to 'https://access.redhat.com/articles/1200223' for more information"
fi

*****************

Just save the above to a file, give it execute (+x) permission, and run it.
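For example, using the file name from the script header comment:

$ chmod +x shellshock-test.sh
$ ./shellshock-test.sh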

Problems with s3cmd utility

On one of the cloud instances, I was trying to upload a large file to an S3 bucket and had to install the s3cmd utility. Going with the approach of getting the latest and greatest did not serve me well. I downloaded the 1.5.0 rc1 version from the link - s3cmd - and unzipped it into a folder. Subsequently, I ran the install command from the installation directory

$ sudo python setup.py install

The above seemed to work fine. However, when I tried to run "s3cmd --configure", I got the below error:-

***********
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred.
  Please try reproducing the error using
  the latest s3cmd code from the git master
  branch found at:
  If the error persists, please report the
  following lines (removing any private
  info as necessary) to:

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Traceback (most recent call last):
  File "/usr/bin/s3cmd", line 2527, in <module>
    report_exception(e)
  File "/usr/bin/s3cmd", line 2465, in report_exception
    s = u' '.join([unicodise(a) for a in sys.argv])
NameError: global name 'unicodise' is not defined

***********

On their GitHub, it appears that others have run into this issue without a proper resolution. So I later had to go back to installing through the package manager on the RHEL instance to make it work.

$cd /etc/yum.repos.d/
$wget http://s3tools.org/repo/RHEL_6/s3tools.repo
$sudo yum install s3cmd
$s3cmd --configure
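With a working s3cmd, the original large file upload is then just a put to the bucket (the bucket and file names below are placeholders):

$s3cmd put /path/to/largefile s3://<bucket-name>/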


All 6 CVEs of the bash "shellshock" vulnerability now seem to have been addressed

Please refer to the Red Hat article - link


Diagnostic Steps

Red Hat Access Labs has provided a script to help confirm if a system is patched against the Shellshock vulnerability. You can also manually test your version of Bash by running the following command:
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
If the output of the above command contains a line containing only the word vulnerable you are using a vulnerable version of Bash. The patch used to fix this issue ensures that no code is allowed after the end of a Bash function.
Note that different Bash versions will also print different warnings while executing the above command. The Bash versions without any fix produce the following output:
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
vulnerable
bash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)'
bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable'
bash: error importing function definition for `BASH_FUNC_x'
test
The versions with only the original CVE-2014-6271 fix applied produce the following output:
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
bash: error importing function definition for `BASH_FUNC_x()'
test
The versions with additional fixes from RHSA-2014:1306, RHSA-2014:1311 and RHSA-2014:1312 produce the following output:
$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `BASH_FUNC_x'
test
The difference in the output is caused by additional function processing changes explained in the "How does this impact systems" section below.
The fix for CVE-2014-7169 ensures that the system is protected from the file creation issue. To test if your version of Bash is vulnerable to CVE-2014-7169, run the following command:
$ cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo
bash: x: line 1: syntax error near unexpected token `='
bash: x: line 1: `'
bash: error importing function definition for `x'
Fri Sep 26 11:49:58 GMT 2014
If your system is vulnerable, the time and date information will be output on the screen and a file called /tmp/echo will be created.
If your system is not vulnerable, you will see output similar to:
$ cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo
date
cat: /tmp/echo: No such file or directory
If your system is vulnerable, you can fix these issues by updating to the most recent version of the Bash package by running the following command:
# yum update bash

Friday, September 26, 2014

Jenkins failed to start up as a service

As part of the AWS instance reboots, the Jenkins service did not come up on one of the machines. The /var/log/jenkins/jenkins.log had the below permissions related error:-

**************
Sep 27, 2014 2:08:49 AM winstone.Logger logInternal
SEVERE: Container startup failed
java.io.FileNotFoundException: /var/cache/jenkins/war/META-INF/MANIFEST.MF (Permission denied)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
        at winstone.HostConfiguration.getWebRoot(HostConfiguration.java:277)
        at winstone.HostConfiguration.<init>(HostConfiguration.java:81)
        at winstone.HostGroup.initHost(HostGroup.java:66)
        at winstone.HostGroup.<init>(HostGroup.java:45)
        at winstone.Launcher.<init>(Launcher.java:143)
        at winstone.Launcher.main(Launcher.java:354)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
sorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at Main._main(Main.java:293)
        at Main.main(Main.java:98)
**************

On checking the /var/cache folder, the user and group owner was set to a different user than the one Jenkins runs as per the /etc/init.d/jenkins script. So the ownership of the /var/cache/jenkins folder had to be changed

$sudo chown -R <jenkins_user>:<jenkins_group> /var/cache/jenkins

and then restart the jenkins service

$sudo service jenkins restart
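You can confirm the ownership change and that Jenkins is running afterwards:

$ls -ld /var/cache/jenkins
$sudo service jenkins status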

Not sure why an Amazon scheduled maintenance reboot would take 6 hrs?

One of the instances that is part of the reboot just reported an "In progress" status, but I couldn't understand for the life of me why it would take 6 hrs, as reported in the duration column of their console window below:-


AWS instance reboots

AWS has scheduled instance reboots over this weekend on around 10% of their EC2 instance fleet to upgrade their hardware. Needless to say, it impacts many of the "always-on" production machines. Amazon's Jeff Barr provided an update today on their blog:-

http://aws.amazon.com/blogs/aws/

There have also been several AWS users complaining on the forums about why a simple start/stop may not solve the problem:-

https://forums.aws.amazon.com/thread.jspa?threadID=161544&tstart=0

In the end, these machines will have to be rebooted, therefore always have a backup and recovery plan ready for all your "always-on" machines.



Perfect storm! Shellshock bash vulnerability and AWS instance reboots

As the saying goes - "when it rains it pours". We have had to deal with AWS instance reboots as well as patching the "shellshock" bash vulnerability (CVE-2014-6271) at the same time across many of our instances. The quick way to determine if your instances are vulnerable is to run the below command:-

$env var='() { ignore this;}; echo vulnerable' bash -c /bin/true

If the above prints "vulnerable" then you are exposed to bash vulnerability. You can also check the current version of bash installed by running the command below:-

$sudo rpm -q bash
bash-4.1.2-15.el6_4.x86_64

Once you have determined it is an old version, you can run an update through your package manager

$sudo yum update -y bash

Once the update finishes, you can check for the version again

$sudo rpm -q bash
bash.x86_64 0:4.1.2-15.el6_5.2

Now test for the vulnerability again by running the small test command above. This time it will not print "vulnerable".

Tuesday, September 23, 2014

Grants for locking and unlocking tables in MySQL RDS instances

Sometimes you will need the LOCK TABLES privilege when trying to dump data from an RDS instance. To see if the current user has lock privileges, you can run the below command

$mysql> show grants for current_user\G;
*************************** 1. row ***************************
Grants for <rds_user>@%: GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP,
RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER ON *.* TO '<rds_user>'@'%' IDENTIFIED BY PASSWORD 'XYZ' WITH GRANT OPTION

If you would like to grant all privileges for the RDS mysql root user, you can run the below query

$mysql>GRANT ALL PRIVILEGES ON `%`.* TO <rds_user>@'%' IDENTIFIED BY '<password>' WITH GRANT OPTION;

Now you can run the "show grants" command to see an additional row displayed

mysql> show grants for current_user\G;
*************************** 1. row ***************************
Grants for <rds_user>@%: GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP,
RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER ON *.* TO '<rds_user>'@'%' IDENTIFIED BY PASSWORD 'XYZ' WITH GRANT OPTION
*************************** 2. row ***************************
Grants for <rds_user>@%: GRANT ALL PRIVILEGES ON `%`.* TO '<rds_user>'@'%'
 WITH GRANT OPTION
2 rows in set (0.00 sec)

Now to test if the locks are enabled, you can try the below queries

mysql> lock tables <db>.<table_nameA> READ;
Query OK, 0 rows affected (0.00 sec)

mysql> select count(*) from <db>.<table_nameA>;
+----------+
| count(*) |
+----------+
|   991225 |
+----------+
1 row in set (0.41 sec)

mysql> select count(*) from <db>.<table_nameB>;
ERROR 1100 (HY000): Table '<table_nameB>' was not locked with LOCK TABLES

To unlock the tables, you can run "unlock tables" command

mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)
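The LOCK TABLES privilege usually matters here because mysqldump takes table locks by default; a sketch of both dump variants (endpoint, user and database names below are placeholders):

$mysqldump -h <rds-endpoint> -u <rds_user> -p --lock-tables <db> > dump.sql
$mysqldump -h <rds-endpoint> -u <rds_user> -p --single-transaction <db> > dump.sql

The second form gives a consistent dump of InnoDB tables without needing LOCK TABLES.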

In RDS instances the FILE privilege for MySQL is not applicable

Typically, in MySQL instances that run on standalone EC2 boxes or local installs, you can use the FILE privilege to have a SQL query write its output to a local flat file, such as

$mysql -u$MyUSER -p$MyPASS -h$MyHOST --port=$MyPORT --socket=$MySOCKET -e "select name, username, email, registerDate, lastvisitDate from TestDB.Employee where username not like 'TestUser' INTO OUTFILE '/tmp/dbout.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\n';"

In an RDS instance, the above query won't work because we don't have access to the local file system of RDS, so "/tmp/dbout.csv" cannot be created. You will see an error like

"ERROR 1045 (28000): Access denied for user '<rds_user'@'%' (using password:YES)"

Instead, you would have to run the query with the "--execute" switch from a remote EC2 instance and dump the query results to a flat file, separating out the fields using the "sed" utility, as documented on the AWS developer forum - threadID=41443

$mysql -u$MyUSER -p$MyPASS -h$MyHOST --port=$MyPORT --socket=$MySOCKET -e "select name, username, email, registerDate, lastvisitDate from TestDB.Employee where username not like 'TestUser';" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g'  >/tmp/dbout.csv

Sunday, September 21, 2014

Finding a process that is consuming CPU excessively


  • Look at top output




  • Next check ps output to see the location on the file system where the process is running from 


$ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10
%CPU   PID USER     COMMAND
59.5 11286 ec2-user      /bin/sh /tmp/ismp002/1978898.tmp
59.1 22608 ec2-user      /bin/sh /tmp/ismp002/2326448.tmp
 5.8 22861 ec2-user     ./engine -Djmx_port=5555
 4.7 22865 ec2-user      ./engine --innerProcess


  • Run strace to see what calls are consuming the cpu cycles


$strace -c -p 11286
Process 11286 attached - interrupt to quit
Process 11286 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 90.77    0.667227         273      2447           clone
  6.38    0.046881          10      4892      2446 wait4
  1.33    0.009747           0     22014     19568 stat
  1.02    0.007489           0     31802           rt_sigprocmask
  0.12    0.000864           0      4893           rt_sigaction
  0.11    0.000826           0      2446      2446 ioctl
  0.11    0.000819           0      2446           read
  0.09    0.000659           0      2446      2446 lseek
  0.08    0.000579           0      2446           rt_sigreturn
------ ----------- ----------- --------- --------- ----------------
100.00    0.735091                 75832     26906 total


  • In the above case the /tmp/ismp002/1978898.tmp file was no longer present on the system, so it looks like a stray process that was left running on the system
  • Do a kill -9 on the unwanted processes, as shown below
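For the two stray shells identified in the ps output above, that would be:

$sudo kill -9 11286 22608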



Saturday, September 20, 2014

Updating Route53 A records for a hosted zone using restricted IAM policy

If you have an application that is dynamically tearing down ELBs and creating new ELBs with instances in its pool, you will be in a situation where Route 53 record sets have to be frequently updated as well. Manually updating the A records of newly created ELBs can be tedious. Instead, you can create a restricted IAM user or group policy that allows the "ChangeResourceRecordSets" privilege, such as below:-

**************
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:GetHostedZone",
                "route53:ChangeResourceRecordSets"
            ],
            "Resource": "arn:aws:route53:::hostedzone/<ZONE_ID>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:GetHostedZone",
                "route53:ListResourceRecordSets"
            ],
            "Resource": "arn:aws:route53:::hostedzone/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:GetChange"
            ],
            "Resource": "arn:aws:route53:::change/*"
        }
    ]
}
**************

Once the above policy is set, you can use AWS CLI as below to CREATE, DELETE, UPSERT A records on Route 53:-

***************
$aws route53 change-resource-record-sets --hosted-zone-id <ZONE_ID> --change-batch file://opt/sampleupsert.json --profile <domain>
{
    "ChangeInfo": {
        "Status": "PENDING",
        "Comment": "string",
        "SubmittedAt": "2014-09-20T12:40:49.159Z",
        "Id": "/change/CNZHKUS1ZF9Z9"
    }
}
***************

Once the change is submitted, you can query for the status till it shows "INSYNC" as below:-

***************
$aws route53 get-change --id /change/CNZHKUS1ZF9Z9 --profile <domain>
{
    "ChangeInfo": {
        "Status": "INSYNC",
        "Comment": "string",
        "SubmittedAt": "2014-09-20T12:40:49.159Z",
        "Id": "/change/CNZHKUS1ZF9Z9"
    }
}
***************

and the sampleupsert.json file, which is passed as the argument to the --change-batch parameter of the "change-resource-record-sets" command in the AWS CLI, looks like

***************
{
  "Comment": "string",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "<domain>",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<ZONE_ID>",
          "DNSName": "<DNS_NAME of ELB>",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
***************

The above can also be done using the Boto library and simple Python code as below:-

***************
import boto.route53
from boto.route53.record import ResourceRecordSets

# Connect to Route 53 and build an UPSERT change for an alias A record
conn = boto.route53.connect_to_region('us-east-1')
zone_id = "<ZONE_ID>"
changes = ResourceRecordSets(conn, zone_id)
change = changes.add_change("UPSERT", '<domain>', "A")
# Alias target: the ELB's hosted zone id and the ELB's DNS name
change.set_alias("<ZONE_ID_2>", "<ELB A record>")
changes.commit()
***************

If ZONE_ID is not known, then you have to modify the above policy to allow listing of zones and then iterate through the code as shown in this blog - managing-amazon-route-53-dns-with-boto

Thursday, September 18, 2014

Domain delegation to Amazon Route 53

If you have domains registered with an external domain name provider like Bluehost or Network Solutions, you can set up domain delegation to the Amazon Route 53 service to handle subdomains more easily.

In Amazon Route 53 console, you can create a new hosted zone by clicking on "New Hosted Zone" and then specifying a name


Once you create the hosted zone, you will see NS records and SOA records as below:-



Next, you will have to set up domain delegation in your internal and external DNS servers by adding the NS records provided by Amazon Route 53 in the above screenshot. Once you have updated the DNS entries, you can run the below dig query to confirm that the NS records match what has been provided by Amazon

$ dig -t NS example.mycompany.com

; <<>> DiG 9.7.1 <<>> -t NS example.mycompany.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32360
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 4

;; QUESTION SECTION:
;example.mycompany.com.              IN      NS

;; ANSWER SECTION:
example.mycompany.com.       0       IN      NS      ns-1467.awsdns-55.org.
example.mycompany.com.       0       IN      NS      ns-540.awsdns-03.net.
example.mycompany.com.       0       IN      NS      ns-1714.awsdns-22.co.uk.
example.mycompany.com.       0       IN      NS      ns-292.awsdns-36.com.
....

Monday, September 8, 2014

unix grep to search for multiple filter expressions

Recently, I was searching logs for multiple string patterns. One way to do that is to chain greps, e.g. "grep -E <string1> | grep -E <string2>". However, the below seemed more efficient

$ grep "08/Aug.*<service name>" access_log | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":00"}' | sort -n | uniq -c
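For a plain OR across multiple patterns in a single pass, grep -E with alternation also works:

$ grep -E "<string1>|<string2>" access_log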

Using wget to determine if ELB is misconfigured and is attached to a private subnet

Typically, you would want ELBs to be available in at least 2 zones in a particular region, so that if one zone goes down, the ELB in the second zone will handle all the requests. If your ELB is configured correctly for multiple zones, you can do an "nslookup" on the ELB A record and you will get multiple IPs returned (one for each zone).
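For example, for the ELB in the wget output below, an nslookup should return one address per zone (output abbreviated):

$ nslookup <elb-name>.ap-northeast-1.elb.amazonaws.com
...
Address: 54.92.98.228
Address: 54.238.149.12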

If the ELB is attached to a private subnet, you would see a request failure using wget:-

****************
$ wget http://<elb-name>.ap-northeast-1.elb.amazonaws.com/index.html
--2014-09-08 18:08:29--  http://<elb-name>.ap-northeast-1.elb.amazonaws.com/index.html
Resolving <elb-name>.ap-northeast-1.elb.amazonaws.com (<elb-name>.ap-northeast-1.elb.amazonaws.com)... 54.92.98.228, 54.238.149.12
Connecting to <elb-name>.ap-northeast-1.elb.amazonaws.com (<elb-name>.ap-northeast-1.elb.amazonaws.com)|54.92.98.228|:80... failed: Connection timed out.
Connecting to <elb-name>.ap-northeast-1.elb.amazonaws.com (<elb-name>.ap-northeast-1.elb.amazonaws.com)|54.238.149.12|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/xml]
Saving to: `index.html'
****************

The other way to confirm and check is through the AWS console:-


In the console, make sure the "subnet id" in each of the ELB's availability zones has an igw-* (internet gateway) associated with it. ELBs need to be in public subnets so that they can be accessed from outside.

Friday, August 29, 2014

Using netstat to grep pids and LISTEN ports

$sudo netstat -anp |grep LISTEN | grep -v "::"
tcp        0      0 0.0.0.0:38860               0.0.0.0:*                   LISTEN      1497/java
tcp        0      0 0.0.0.0:57583               0.0.0.0:*                   LISTEN      1497/java
tcp        0      0 0.0.0.0:2000                0.0.0.0:*                   LISTEN      1497/java
tcp        0      0 0.0.0.0:8080                0.0.0.0:*                   LISTEN      1497/java
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1054/sshd
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1161/master
tcp        0      0 0.0.0.0:57824               0.0.0.0:*                   LISTEN      1497/java
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN      1497/java
tcp        0      0 127.0.0.1:8005              0.0.0.0:*                   LISTEN      1497/java
tcp        0      0 0.0.0.0:58535               0.0.0.0:*                   LISTEN      1497/java
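An equivalent view with lsof, if you prefer it (shows the same listening TCP sockets with process names):

$sudo lsof -nP -iTCP -sTCP:LISTEN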

Thursday, August 28, 2014

Changing font color in a linux system

In many of the Linux AMIs the default directory font color is hardly readable, so not sure why the default color is set to "34" (blue) in the /etc/DIR_COLORS file. In order to change it, you can do the following

$sudo vi /etc/DIR_COLORS
...
DIR 01;34       # directory
...

Change it to cyan (36) to improve readability:

...
DIR 01;36       # directory

After making the change, save the file, then re-source your profile or log out and log back in to see the change.

If you would like to set it to other colors, you may choose from

30=black
31=red
32=green
33=yellow
34=blue
35=magenta
36=cyan
37=white
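If you don't want to log out and back in, you can also re-read the color database in the current shell; dircolors generates the LS_COLORS setting from /etc/DIR_COLORS:

$ eval "$(dircolors -b /etc/DIR_COLORS)"
$ ls --color=auto /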

Saturday, August 23, 2014

CloudBerry Explorer - UI alternative to s3cmd or AWS CLI commands

If you have storage on Amazon S3, sooner or later you will need a tool that can back up entire buckets to local disk or to another cloud storage provider. The Amazon S3 console only allows you to download specific files within a bucket at a time. CloudBerry Explorer has a free version and a pro version, and I am happy to report that the free one does the job fairly well.

You would have to create an S3 bucket and an IAM user with an S3 "read-only" policy like the one below:-

************
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "*"
        }
    ]
}
************

Next, download and install CloudBerry Explorer and then configure it with the IAM user you just created.


Once you have it set up, you can download entire buckets or sync with other storage options.
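For comparison, the same full-bucket download can also be scripted with the AWS CLI (the bucket name and local path below are placeholders):

$aws s3 sync s3://<bucket-name> /local/backup/path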

Monday, August 18, 2014

Use 'bcrypt' algorithm for hashing passwords

bcrypt is an adaptive password hashing algorithm rather than a symmetric encryption algorithm. It is much slower than MD5 or SHA-1, but more secure. The bcrypt command line utility takes a file as input and, when decrypting, can output to stdout:

**********
$ date; bcrypt -c -s12 test.txt;date
Mon Aug 18 23:04:36 PDT 2014
Encryption key:Key must be at least 8 characters
Encryption key:
Again:
Mon Aug 18 23:04:47 PDT 2014

$ date; bcrypt -o test.txt.bfe;date
Mon Aug 18 23:05:38 PDT 2014
Encryption key:
this is a sample text
Mon Aug 18 23:05:41 PDT 2014

***********

No Amazon SES service in Tokyo region

I recently had a request from a customer in Japan to enable the SES service in Amazon's Tokyo region. However, we found that it was not possible when we ran into the error below:-



Below is the response from AWS Support:-

"Yes, that is correct, SES is not currently supported in Tokyo Region. I do not have a roadmap for it to be available in Tokyo but I will ask the SES Engineering team to see if they can give an idea of when it will be available. We dont have another service such as SES that would keep there data in Japan so if the requirement is to keep data in Japan then another SMTP service such Sendgrid may have a Japanese region."
....
"I have received word from our SES engineering team that there are no current plans to include Tokyo region for SES. Apologies that we cannot be more of help with this regard."

C'mon AWS, if SendGrid can do it, you can!!.