Monday, December 21, 2015

brew update seems to break brew cask update

First I ran a brew update; subsequently, when I tried to update brew-cask, it threw the below error:-

********
==> Homebrew Origin:
https://github.com/Homebrew/homebrew
==> Homebrew-cask Version:
0.57.0
test-mac:~ test$ brew-cask update
==> Error: Could not link caskroom/cask manpages to:
==>   /usr/local/share/man/man1/brew-cask.1
==> 
==> Please delete these files and run `brew tap --repair`.
*******
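
The error message itself points at a first thing to try: delete the conflicting manpage link and repair the tap. If that clears the error for you, the reinstall shown further below may not be necessary:

*******
$ rm /usr/local/share/man/man1/brew-cask.1
$ brew tap --repair
*******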

The "list" of software on brew-cask was broken

*******
test-mac:~ test$ brew-cask list
Error: Cask 'sublime-text' definition is invalid: Bad header line: parse failed
*******

Basically, brew cask doctor and brew cask cleanup did not help, and brew-cask had to be reinstalled:-

*******
test-mac:~ test$ brew unlink brew-cask
Unlinking /usr/local/Cellar/brew-cask/0.57.0... 2 symlinks removed
test-mac:~ test$ brew install brew-cask
==> Installing brew-cask from caskroom/cask
==> Cloning https://github.com/caskroom/homebrew-cask.git
Updating /Library/Caches/Homebrew/brew-cask--git
==> Checking out tag v0.60.0
==> Caveats
You can uninstall this formula as `brew tap Caskroom/cask` is now all that's
needed to install Homebrew Cask and keep it up to date.
==> Summary
🍺  /usr/local/Cellar/brew-cask/0.60.0: 3 files, 12K, built in 78 seconds
test-mac:~ test$ brew-cask list
-bash: /usr/local/bin/brew-cask: No such file or directory
test-mac:~ test$ brew cask list
sublime-text virtualbox
test-mac:~ test$ brew cask cleanup
==> Removing dead symlinks
Nothing to do
==> Removing cached downloads
Nothing to do
test-mac:~ test$ brew cask update
Already up-to-date.
test-mac:~ test$ brew update
Already up-to-date.
*******

Friday, September 25, 2015

Upgrade to OS X 10.10 messed up the Terminal

Recently, an automatic patch update of OS X Yosemite 10.10 left my terminal window unable to find even the basic commands, e.g.:-

*********
Last login: Fri Sep 25 18:46:40 on ttys000
test-mac:~ test$ ls
-bash: ls: command not found
test-mac:~ test$ vi
-bash: vi: command not found
test-mac:~ test$ 
*********

I tried the solution suggested on Stack Exchange, and that did the trick:-

$export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
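
Note that export only fixes the current session. To make the PATH survive new terminal windows, you can append the same line to ~/.bash_profile (assuming bash is your login shell):

*********
$ echo 'export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin' >> ~/.bash_profile
$ source ~/.bash_profile
*********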

Thursday, May 28, 2015

Mounting XFS RAID 10 volume over NFS

In certain situations you may want to share a RAID10 volume over NFS as a common mount point across the instances in a VPC. You can follow the steps below:-

NFS Server instance:-

1. Install "nfs-utils" package

************
$sudo yum install -y nfs-utils
************

2. Enable the below services to start at instance boot time

************
$sudo chkconfig --levels 345 nfs on
$sudo chkconfig --levels 345 nfslock on
$sudo chkconfig --levels 345 rpcbind on
************
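
To verify the runlevel configuration took effect, chkconfig --list can be used; the output below is illustrative of what Amazon Linux should report:

************
$ chkconfig --list nfs
nfs             0:off   1:off   2:off   3:on    4:on    5:on    6:off
************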

3. Export the mounted volume to the machines in the VPC CIDR block

************
$ cat /etc/exports
/mnt/md0    <VPC_CIDR>(rw)
************
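
The bare (rw) entry relies on the export defaults. If you want the behaviour to be explicit, standard exports(5) options such as sync and no_root_squash can be spelled out; a variant, to be adjusted to your security needs:

************
/mnt/md0    <VPC_CIDR>(rw,sync,no_root_squash)
************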

4. Set the permissions for the mount point, and for any sub-folders

************
$ ls -l
total 0
drwxrwxrwx 2 root root 69 May 28 06:22 md0
************

NOTE - I had to give 777 as the permissions for the /mnt/md0 folder. Without appropriate permissions there will be a mount error. 766 does not work either, because a directory needs the execute bit set for it to be traversable.
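
For reference, the permissions above were set with a plain chmod on the mount point:

************
$sudo chmod 777 /mnt/md0
************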

5. Start the services

*************
$ sudo service rpcbind start
Starting rpcbind:                                          [  OK  ]
$ sudo service nfs start
Initializing kernel nfsd:                                  [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
$ sudo service nfslock start
Starting NFS statd:                                        [  OK  ]
*************

6. Export the mounted RAID volume to all the instances in the VPC

*************
$ sudo exportfs -av
exporting <VPC_CIDR>:/mnt/md0
*************
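
If you edit /etc/exports later, the export list can be re-read without restarting the NFS services:

*************
$ sudo exportfs -ra
*************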

7. Allow ingress rules on the NFS server instance's security group for TCP and UDP ports 2049 and 111, used by NFS and rpcbind respectively

*************
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 111 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 111 --cidr <VPC_CIDR>
*************
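
Before moving to the client, it is worth confirming that rpcbind and nfs are registered and listening; rpcinfo lists the RPC services along with their ports (you should see portmapper on 111 and nfs on 2049):

*************
$ rpcinfo -p | grep -E 'portmapper|nfs'
*************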

NFS Client instance:-

1. Install "nfs-utils" package

************
$sudo yum install -y nfs-utils
************

2. Create a mount point on the instance

************
$sudo mkdir /vol
************

3. Allow ingress rules for TCP and UDP ports 2049 and 111 for NFS and rpcbind communication

*************
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 111 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 111 --cidr <VPC_CIDR>
*************

4. Mount the NFS volume on the NFS client machine

*************
$sudo mount -t nfs <private ip of nfs server>:/mnt/md0 /vol
*************
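
To make the client mount persist across reboots, a matching /etc/fstab entry can be added (nofail keeps the boot from hanging if the NFS server is unreachable):

*************
<private ip of nfs server>:/mnt/md0    /vol    nfs    defaults,nofail    0   0
*************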

5. Confirm the mounted RAID volume shows the available disk space

*************
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            7.8G  1.1G  6.6G  15% /
devtmpfs              490M   56K  490M   1% /dev
tmpfs                 499M     0  499M   0% /dev/shm
<private ip>:/mnt/md0  3.0G   33M  3.0G   2% /vol
*************

6. Test by writing a file to the mounted NFS volume on the client instance. (Note that the redirection below happens in your own shell, not under sudo; it succeeds here because of the 777 permissions on the export.)

*************
$ sudo echo "this is a test" >> /vol/test.txt
*************
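
The write can then be confirmed from the server side, where the same file appears under the exported directory:

*************
$ cat /mnt/md0/test.txt
this is a test
*************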

7. Also check the system logs using dmesg

*************
$ sudo dmesg |tail
[  360.660410] FS-Cache: Loaded
[  360.773794] RPC: Registered named UNIX socket transport module.
[  360.777793] RPC: Registered udp transport module.
[  360.779718] RPC: Registered tcp transport module.
[  360.781867] RPC: Registered tcp NFSv4.1 backchannel transport module.
[  360.845503] FS-Cache: Netfs 'nfs' registered for caching
[  443.240670] Key type dns_resolver registered
[  443.251609] NFS: Registering the id_resolver key type
[  443.253882] Key type id_resolver registered
[  443.255682] Key type id_legacy registered
*************



Tuesday, May 26, 2015

Creating XFS RAID10 on Amazon Linux

Typically AWS support recommends sticking with the ext(x) family of file systems, but for performance reasons you may want a RAID10 volume backed by XFS. To create an XFS-based RAID10, you can follow the steps below:-

1. Create an Amazon Linux instance within a subnet in a VPC

************
$aws ec2 run-instances --image-id ami-1ecae776 --count 1 --instance-type t2.micro --key-name aminator --security-group-ids sg-7ad9a61e --subnet-id subnet-4d8df83a --associate-public-ip-address
************

2. Create the EBS volumes. For RAID10, you will need six block storage devices, each created similar to the one shown below

************
$aws ec2 create-volume --size 1 --region us-east-1 --availability-zone us-east-1d --volume-type gp2
************
NOTE - The EBS volumes must be in the same availability zone (and hence the same region) as the instance.

3. Attach the created volumes to the instance as shown below, repeating for each of the six volumes (a loop sketch follows the example)

************
$aws ec2 attach-volume --volume-id vol-c33a982d --instance-id i-120d96c2 --device /dev/xvdb
************
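
Since this has to be repeated six times with different volume IDs and device names, a small bash loop saves some typing. The volume IDs below (other than the first) are placeholders for the ones returned by create-volume:

************
$ vols=(vol-c33a982d vol-aaaaaaaa vol-bbbbbbbb vol-cccccccc vol-dddddddd vol-eeeeeeee)
$ devs=(/dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf /dev/xvdg)
$ for i in {0..5}; do
>   aws ec2 attach-volume --volume-id "${vols[$i]}" --instance-id i-120d96c2 --device "${devs[$i]}"
> done
************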

4. Confirm that the devices have been attached successfully

************
$ ls -l /dev/sd*
lrwxrwxrwx 1 root root 4 May 27 00:44 /dev/sda -> xvda
lrwxrwxrwx 1 root root 5 May 27 00:44 /dev/sda1 -> xvda1
lrwxrwxrwx 1 root root 4 May 27 00:57 /dev/sdb -> xvdb
lrwxrwxrwx 1 root root 4 May 27 00:57 /dev/sdc -> xvdc
lrwxrwxrwx 1 root root 4 May 27 00:58 /dev/sdd -> xvdd
lrwxrwxrwx 1 root root 4 May 27 00:59 /dev/sde -> xvde
lrwxrwxrwx 1 root root 4 May 27 00:59 /dev/sdf -> xvdf
lrwxrwxrwx 1 root root 4 May 27 01:00 /dev/sdg -> xvdg
************

5. Check the block device I/O characteristics using "fdisk"

************
 $sudo fdisk -l /dev/xvdc

Disk /dev/xvdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
************

6. Create RAID10 using "mdadm" command

************
$sudo mdadm --create --verbose /dev/md0 --level=raid10 --raid-devices=6 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf /dev/xvdg
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 1047552K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
************
NOTE - In case only striping or mirroring is required, you can specify either "raid0" or "raid1" for the "level" parameter
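
One step worth adding here: capture the array definition into mdadm.conf, otherwise the array may reassemble under a different name (for example md127) after a reboot. On Amazon Linux the file is /etc/mdadm.conf:

************
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
************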

7. Confirm that the RAID10 array has been created successfully

************
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE   MOUNTPOINT
xvda    202:0    0   8G  0 disk
+-xvda1 202:1    0   8G  0 part   /
xvdb    202:16   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdc    202:32   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdd    202:48   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvde    202:64   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdf    202:80   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdg    202:96   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
************
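
You can also check the array state and watch the initial resync progress through /proc/mdstat:

************
$ cat /proc/mdstat
************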

8. Since Amazon Linux does not ship with the mkfs.xfs program, you will have to install the "xfsprogs" package from the package manager

************
$sudo yum install -y xfsprogs
$ ls -la /sbin/mkfs*
-rwxr-xr-x 1 root root   9496 Jul  9  2014 /sbin/mkfs
-rwxr-xr-x 1 root root  28808 Jul  9  2014 /sbin/mkfs.cramfs
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext2
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext3
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext4
-rwxr-xr-x 1 root root 328632 Sep 12  2014 /sbin/mkfs.xfs
************

9. Create an XFS file system on the RAID10 volume

************
$ sudo mkfs.xfs /dev/md0
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md0               isize=256    agcount=8, agsize=98176 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=785408, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
************

10. Create a mount point to mount the RAID device

************
$sudo mkdir /mnt/md0
************

11. Mount the RAID volume on the mount point

************
$sudo mount -t xfs /dev/md0 /mnt/md0
************

12. Confirm the mount has been successful using "df" command

************
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  1.1G  6.6G  14% /
devtmpfs        490M   88K  490M   1% /dev
tmpfs           499M     0  499M   0% /dev/shm
/dev/md0        3.0G   33M  3.0G   2% /mnt/md0
************

13. Check the I/O characteristics of the RAID10 volume

************
$sudo fdisk -l /dev/md0

Disk /dev/md0: 3218 MB, 3218079744 bytes, 6285312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
************

14. To mount the volume automatically at system boot, add an entry for it to /etc/fstab.

************
$ sudo vi /etc/fstab 
$ cat /etc/fstab
#
LABEL=/     /           ext4    defaults,noatime  1   1
tmpfs       /dev/shm    tmpfs   defaults        0   0
devpts      /dev/pts    devpts  gid=5,mode=620  0   0
sysfs       /sys        sysfs   defaults        0   0
proc        /proc       proc    defaults        0   0
/dev/md0    /mnt/md0    xfs     defaults,nofail 0   2
************
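
Since md device numbering can change between boots, a more robust fstab variant mounts by filesystem UUID; blkid reports it (the UUID below is a placeholder for the one printed on your system):

************
$ sudo blkid /dev/md0
/dev/md0: UUID="<uuid>" TYPE="xfs"
# /etc/fstab line using the UUID instead of the device path:
UUID=<uuid>    /mnt/md0    xfs     defaults,nofail 0   2
************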

15. Run "mount -a" to confirm that there are no errors in the fstab

************
$ sudo mount -a
************

You could follow a similar set of steps for setting up an ext4-based RAID volume, as described in the AWS docs.

Monday, May 25, 2015

Securing SSH logins using fail2ban

The fail2ban utility helps mitigate brute-force attacks on SSH logins by automatically blacklisting the offending IP addresses in the iptables rules, based on filter conditions. To install fail2ban on your Amazon Linux instances, you can follow the steps below

1. Install fail2ban through package manager

************
$sudo yum install -y fail2ban
************

2. Configure fail2ban by copying the stock configuration to a local override and editing it

************
$sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

$sudo vi /etc/fail2ban/jail.local
...
# The DEFAULT allows a global definition of the options. They can be overridden
# in each jail afterwards.

[DEFAULT]

# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not
# ban a host which matches an address in this list. Several addresses can be
# defined using space separator.
ignoreip = 127.0.0.1/8 198.172.1.10/32

# "bantime" is the number of seconds that a host is banned.
bantime  = 1800
....
[ssh-iptables]

enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
           sendmail-whois[name=SSH, dest=user@mycompany.com, sender=fail2ban@example.com]
logpath  = /var/log/secure
maxretry = 5
************

3. Configure iptables with some basic rules

************
$sudo iptables -A INPUT -i lo -j ACCEPT
$sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
$sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
$sudo iptables -A INPUT -j DROP
************
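
These iptables rules live only in memory. On Amazon Linux they can be persisted across reboots via the init script's save action, which writes them to /etc/sysconfig/iptables:

************
$sudo service iptables save
************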

4. Start fail2ban service

************
$sudo service fail2ban start
************
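
Once the service is up, fail2ban-client can report on the jail, including the currently failed and banned addresses:

************
$ sudo fail2ban-client status ssh-iptables
************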

5. Inspect the iptables rules to make sure the fail2ban chain has been added

************
$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N fail2ban-SSH
-A INPUT -p tcp -m tcp --dport 22 -j fail2ban-SSH
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -j DROP
-A fail2ban-SSH -j RETURN
************

6. You will also receive an email via sendmail like the one below

************
From: Fail2Ban <fail2ban@example.com>
Date: Mon, May 25, 2015 at 10:58 AM
Subject: [Fail2Ban] SSH: started
To: admin@mycompany.com


Hi,

The jail SSH has been started successfully.

Regards,

Fail2Ban
************

7. Now try to SSH into the machine from an IP address that is not covered by the ignoreip rule. After maxretry (5 in the configuration above) failed attempts, you will notice a permission denied message and the below iptables rule added

***********
-A fail2ban-SSH -s 10.98.1.12/32 -j REJECT --reject-with icmp-port-unreachable
***********
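
If you ban yourself by accident, the ban can be lifted without waiting for bantime to expire (the unbanip command is available in fail2ban 0.8.8 and later):

***********
$ sudo fail2ban-client set ssh-iptables unbanip 10.98.1.12
***********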

To understand in detail how fail2ban protects Linux servers, you can refer to the link below:-

https://www.digitalocean.com/community/tutorials/how-fail2ban-works-to-protect-services-on-a-linux-server

Sunday, May 24, 2015

Open source OWASP scanner - Zed Attack Proxy (ZAP)

You can integrate an OWASP scanner into your build pipeline to test your application for web-related vulnerabilities. Zed Attack Proxy (ZAP) allows you to continuously scan your applications for OWASP-class vulnerabilities so you can mitigate them.

To install the ZAP proxy on an Amazon Linux instance, you can follow the steps below

1. Install OpenJDK through the package manager.

***************
$sudo yum install -y java-1.7.0-openjdk-devel-1.7.0.79-2.5.5.1.59
***************

2. Download and install ZAP proxy

***************
$wget http://downloads.sourceforge.net/project/zaproxy/2.4.0/ZAP_2.4.0_Linux.tar.gz
$tar xvfz ZAP_2.4.0_Linux.tar.gz
$sudo cp -Ra ZAP_2.4.0 /opt/zaproxy
***************

3. If you are running the proxy in a headless environment, you will need to pass the below JVM argument in zap.sh

**************
$sudo vi zap.sh
....
#Start ZAP

exec java ${JMEM} -Djava.awt.headless=true -XX:PermSize=256M -jar "${BASEDIR}/zap-2.4.0.jar" "$@"
**************

4. Increase Xmx value in zap.sh from 512m to 1024m
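
Since zap.sh already references ${JMEM} on the exec line shown above, this amounts to editing its assignment; a sketch, assuming the stock script sets it with a plain -Xmx flag:

**************
$sudo vi /opt/zaproxy/zap.sh
....
JMEM="-Xmx1024m"
....
**************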

5. Download the ZAP API client jar. Note that it is a jar, not a tarball, so there is nothing to extract; it is used directly on the Java classpath.

*************
$wget http://hivelocity.dl.sourceforge.net/project/zaproxy/client-api/zap-api-2.4-v1.jar
*************

6. Run the ZAP proxy in daemon (headless, intercepting) mode

*************
sudo ./zap.sh -daemon
*************

7. Execute a scan using the client API by sending an HTTP query - https://code.google.com/p/zaproxy/wiki/ApiDetailsActions

*************
$ curl -vvv http://localhost:8080/JSON/spider/action/scan/?url=http%3A%2F%2Fwww.google.com%3A80%2Fbodgeit%2F
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /JSON/spider/action/scan/?url=http%3A%2F%2Fwww.google.com%3A80%2Fbodgeit%2F HTTP/1.1
> User-Agent: curl/7.40.0
> Host: localhost:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Pragma: no-cache
< Cache-Control: no-cache
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET,POST,OPTIONS
< Access-Control-Allow-Headers: ZAP-Header
< X-Clacks-Overhead: GNU Terry Pratchett
< Content-Length: 12
< Content-Type: application/json; charset=UTF-8
*************
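
The scan action returns a scan id in the JSON body; progress can then be polled through the matching view endpoint documented on the API details page linked above (the scanId below is a placeholder for the id you received):

*************
$ curl http://localhost:8080/JSON/spider/view/status/?scanId=0
*************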

Thursday, May 21, 2015

Convenient and fast SSL scanner: SSLyze

SSLyze is a handy SSL scanner that can report some of the common SSL-related vulnerabilities, such as weak ciphers or Heartbleed. The tool can be obtained from:-

https://github.com/nabla-c0d3/sslyze/releases

To run the tool, execute it as below:-

**************
$ sslyze --regular <www.yoursite.com>:443

 AVAILABLE PLUGINS
 -----------------

  PluginSessionResumption
  PluginHeartbleed
  PluginCertInfo
  PluginChromeSha1Deprecation
  PluginCompression
  PluginSessionRenegotiation
  PluginOpenSSLCipherSuites
  PluginHSTS

 CHECKING HOST(S) AVAILABILITY
 -----------------------------

   www.yoursite.com:443 => <ip address>:443

 SCAN RESULTS FOR www.yoursite.com:443 - <ip address>:443
 --------------------------------------------------------------------------

  * Deflate Compression:
      OK - Compression disabled

  * Session Renegotiation:
      Client-initiated Renegotiations:   VULNERABLE - Server honors client-initiated renegotiations
      Secure Renegotiation:              OK - Supported

  * OpenSSL Heartbleed:
      OK - Not vulnerable to Heartbleed

  * Certificate - Content:
      SHA1 Fingerprint:                  d2675f5dd71b9d5c6331f1ab7e687e5122b437b0
      Common Name:                       www.yoursite.com
      Issuer:                            DigiCert Secure Server CA
      Serial Number:                     05E67DF64B406133A40A5F810DC7E568
      Not Before:                        Jan 21 00:00:00 2014 GMT
      Not After:                         Jan 25 12:00:00 2016 GMT
      Signature Algorithm:               sha1WithRSAEncryption
      Public Key Algorithm:              rsaEncryption
      Key Size:                          2048 bit
      Exponent:                          65537 (0x10001)
      X509v3 Subject Alternative Name:   {'DNS': ['www.yoursite.com']}

  * Certificate - Trust:
      Hostname Validation:               OK - Subject Alternative Name matches
      Microsoft CA Store (08/2014):      OK - Certificate is trusted
      Java 6 CA Store (Update 65):       OK - Certificate is trusted
      Apple CA Store (OS X 10.9.4):      OK - Certificate is trusted
      Mozilla NSS CA Store (08/2014):    OK - Certificate is trusted
      Certificate Chain Received:        ['www.yoursite.com', 'DigiCert Secure Server CA']

  * Certificate - OCSP Stapling:
      NOT SUPPORTED - Server did not send back an OCSP response.

  * TLSV1_2 Cipher Suites:
      Server rejected all cipher suites.

  * SSLV2 Cipher Suites:
      Server rejected all cipher suites.

  * Session Resumption:
      With Session IDs:                  OK - Supported (5 successful, 0 failed, 0 errors, 5 total attempts).
      With TLS Session Tickets:          OK - Supported

  * TLSV1_1 Cipher Suites:
      Server rejected all cipher suites.

  * TLSV1 Cipher Suites:
      Preferred:
                 AES256-SHA                    -              256 bits      HTTP 200 OK
      Accepted:
                 AES256-SHA                    -              256 bits      HTTP 200 OK
                 RC4-SHA                       -              128 bits      HTTP 200 OK
                 RC4-MD5                       -              128 bits      HTTP 200 OK
                 AES128-SHA                    -              128 bits      HTTP 200 OK
                 DES-CBC3-SHA                  -              112 bits      HTTP 200 OK

  * SSLV3 Cipher Suites:
      Preferred:
                 AES256-SHA                    -              256 bits      HTTP 200 OK
      Accepted:
                 AES256-SHA                    -              256 bits      HTTP 200 OK
                 RC4-SHA                       -              128 bits      HTTP 200 OK
                 RC4-MD5                       -              128 bits      HTTP 200 OK
                 AES128-SHA                    -              128 bits      HTTP 200 OK
                 DES-CBC3-SHA                  -              112 bits      HTTP 200 OK



 SCAN COMPLETED IN 16.14 S
 -------------------------
**************
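
If you only need one class of check, the individual plugins listed under AVAILABLE PLUGINS can be invoked on their own instead of --regular; for example, just the Heartbleed test:

**************
$ sslyze --heartbleed <www.yoursite.com>:443
**************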