Monday, December 21, 2015

brew update seems to break brew cask update

First I ran a brew update; subsequently, when I tried to update brew-cask, it threw the below error:-

********
==> Homebrew Origin:
https://github.com/Homebrew/homebrew
==> Homebrew-cask Version:
0.57.0
test-mac:~ test$ brew-cask update
==> Error: Could not link caskroom/cask manpages to:
==>   /usr/local/share/man/man1/brew-cask.1
==> 
==> Please delete these files and run `brew tap --repair`.
*******

The "list" of software on brew-cask was broken

*******
test-mac:~ test$ brew-cask list
Error: Cask 'sublime-text' definition is invalid: Bad header line: parse failed
*******
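
Before reinstalling anything, the fix suggested by the error message itself is worth a try - deleting the conflicting manpage link and repairing the tap (paths exactly as reported in the error above):

*******
$ rm /usr/local/share/man/man1/brew-cask.1
$ brew tap --repair
*******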

Basically, brew cask doctor and cleanup did not help, and brew-cask had to be reinstalled:-

*******
test-mac:~ test$ brew unlink brew-cask
Unlinking /usr/local/Cellar/brew-cask/0.57.0... 2 symlinks removed
test-mac:~ test$ brew install brew-cask
==> Installing brew-cask from caskroom/cask
==> Cloning https://github.com/caskroom/homebrew-cask.git
Updating /Library/Caches/Homebrew/brew-cask--git
==> Checking out tag v0.60.0
==> Caveats
You can uninstall this formula as `brew tap Caskroom/cask` is now all that's
needed to install Homebrew Cask and keep it up to date.
==> Summary
🍺  /usr/local/Cellar/brew-cask/0.60.0: 3 files, 12K, built in 78 seconds
test-mac:~ test$ brew-cask list
-bash: /usr/local/bin/brew-cask: No such file or directory
test-mac:~ test$ brew cask list
sublime-text virtualbox
test-mac:~ test$ brew cask cleanup
==> Removing dead symlinks
Nothing to do
==> Removing cached downloads
Nothing to do
test-mac:~ test$ brew cask update
Already up-to-date.
test-mac:~ test$ brew update
Already up-to-date.
*******

Friday, September 25, 2015

Upgrade to OSX 10.10 messed up the "Terminal"

Recently, an automatic patch update of OSX Yosemite 10.10 left my terminal window unable to understand any bash commands, e.g.

*********
Last login: Fri Sep 25 18:46:40 on ttys000
test-mac:~ test$ ls
-bash: ls: command not found
test-mac:~ test$ vi
-bash: vi: command not found
test-mac:~ test$ 
*********

I tried the solution suggested on Stack Exchange, and that did the trick:

$export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
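
Note that the export only fixes the current shell session. To make the change permanent, the same line can be appended to ~/.bash_profile (assuming bash is your login shell):

*********
$echo 'export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin' >> ~/.bash_profile
*********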

Thursday, May 28, 2015

Mounting XFS RAID 10 volume over NFS

In certain situations you may want to share a RAID10 volume over NFS as a shared mount point across the instances in a VPC. You can follow the steps below.

NFS Server instance:-

1. Install "nfs-utils" package

************
$sudo yum install -y nfs-utils
************

2. Enable the below services to start at instance boot time

************
$sudo chkconfig --levels 345 nfs on
$sudo chkconfig --levels 345 nfslock on
$sudo chkconfig --levels 345 rpcbind on
************

3. Export the mounted volume to the machines in the VPC cidr block

************
$ cat /etc/exports
/mnt/md0    <VPC_CIDR>(rw)
************

4. Set the permissions for the mount point, and also any subfolders

************
$ ls -l
total 0
drwxrwxrwx 2 root root 69 May 28 06:22 md0
************

NOTE - I had to give 777 as the permissions for the /mnt/md0 folder. Without appropriate permissions, there will be a mount point error. 766 does not work either, since group/other need the execute bit to traverse a directory.
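
For reference, the wide-open permissions above can be set on the mount point (and its subfolders) like this; tighten them to your environment's needs:

************
$sudo chmod -R 777 /mnt/md0
************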

5. Start the services

*************
$ sudo service rpcbind start
Starting rpcbind:                                          [  OK  ]
$ sudo service nfs start
Initializing kernel nfsd:                                  [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
$ sudo service nfslock start
Starting NFS statd:                                        [  OK  ]
*************

6. Export the mounted RAID volume to all the instances in the VPC

*************
$ sudo exportfs -av
exporting <VPC_CIDR>:/mnt/md0
*************
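
You can also verify what the server is exporting with showmount (run locally on the server here; pointing it at the server's address from a client works too):

************
$showmount -e localhost
************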

7. Allow ingress rules on nfs server instance's security group for TCP and UDP ports 2049 and 111 for NFS and rpcbind

*************
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 111 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 111 --cidr <VPC_CIDR>
*************

NFS client instance:-

1. Install "nfs-utils" package

************
$sudo yum install -y nfs-utils
************

2. Create a mount point on the instance

************
$sudo mkdir /vol
************

3. Allow ingress rules for TCP and UDP ports 2049 and 111 for NFS and rpcbind communication

*************
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 111 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 111 --cidr <VPC_CIDR>
*************

4. Mount the NFS volume on the NFS client machine

*************
$sudo mount -t nfs <private ip of nfs server>:/mnt/md0 /vol
*************
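
If the client should remount the volume at boot, an /etc/fstab entry along these lines is the usual approach (nofail keeps the instance booting even if the server is unreachable):

*************
<private ip of nfs server>:/mnt/md0    /vol    nfs    defaults,nofail    0    0
*************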

5. Confirm the mounted RAID volume shows available disk space

*************
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            7.8G  1.1G  6.6G  15% /
devtmpfs              490M   56K  490M   1% /dev
tmpfs                 499M     0  499M   0% /dev/shm
<private ip>:/mnt/md0  3.0G   33M  3.0G   2% /vol
*************

6. Test by writing a file on the mounted NFS volume from the client instance

*************
$ echo "this is a test" | sudo tee -a /vol/test.txt
*************

7. Also check the system logs using dmesg

*************
$ sudo dmesg |tail
[  360.660410] FS-Cache: Loaded
[  360.773794] RPC: Registered named UNIX socket transport module.
[  360.777793] RPC: Registered udp transport module.
[  360.779718] RPC: Registered tcp transport module.
[  360.781867] RPC: Registered tcp NFSv4.1 backchannel transport module.
[  360.845503] FS-Cache: Netfs 'nfs' registered for caching
[  443.240670] Key type dns_resolver registered
[  443.251609] NFS: Registering the id_resolver key type
[  443.253882] Key type id_resolver registered
[  443.255682] Key type id_legacy registered
*************



Tuesday, May 26, 2015

Creating XFS RAID10 on Amazon Linux

Typically, AWS support recommends sticking with ext(x)-based file systems, but for performance reasons you may want to create a RAID10 volume with an XFS file system on top. To do so, you can follow the steps below:-

1. Create an Amazon Linux instance within a subnet in a VPC

************
$aws ec2 run-instances --image-id ami-1ecae776 --count 1 --instance-type t2.micro --key-name aminator --security-group-ids sg-7ad9a61e --subnet-id subnet-4d8df83a --associate-public-ip-address
************

2. Create EBS volumes. For RAID10 you will need six block storage devices, each created similar to the one shown below

************
$aws ec2 create-volume --size 1 --region us-east-1 --availability-zone us-east-1d --volume-type gp2
************
NOTE - The EBS volumes must be in the same region and availability zone as the instance.

3. Attach the created volumes to the instance as shown below, repeating for each of the six devices (/dev/xvdb through /dev/xvdg)

************
$aws ec2 attach-volume --volume-id vol-c33a982d --instance-id i-120d96c2 --device /dev/xvdb
************

4. Confirm that the devices have been attached successfully

************
$ ls -l /dev/sd*
lrwxrwxrwx 1 root root 4 May 27 00:44 /dev/sda -> xvda
lrwxrwxrwx 1 root root 5 May 27 00:44 /dev/sda1 -> xvda1
lrwxrwxrwx 1 root root 4 May 27 00:57 /dev/sdb -> xvdb
lrwxrwxrwx 1 root root 4 May 27 00:57 /dev/sdc -> xvdc
lrwxrwxrwx 1 root root 4 May 27 00:58 /dev/sdd -> xvdd
lrwxrwxrwx 1 root root 4 May 27 00:59 /dev/sde -> xvde
lrwxrwxrwx 1 root root 4 May 27 00:59 /dev/sdf -> xvdf
lrwxrwxrwx 1 root root 4 May 27 01:00 /dev/sdg -> xvdg
************

5. Check the block device I/O characteristics using "fdisk"

************
 $sudo fdisk -l /dev/xvdc

Disk /dev/xvdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
************

6. Create RAID10 using "mdadm" command

************
$sudo mdadm --create --verbose /dev/md0 --level=raid10 --raid-devices=6 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf /dev/xvdg
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 1047552K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
************
NOTE - In case only striping or mirroring is required, you can specify either "raid0" or "raid1" for the "level" parameter

7. Confirm that the RAID10 array has been created successfully

************
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE   MOUNTPOINT
xvda    202:0    0   8G  0 disk
+-xvda1 202:1    0   8G  0 part   /
xvdb    202:16   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdc    202:32   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdd    202:48   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvde    202:64   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdf    202:80   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdg    202:96   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
************
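
You can also watch the initial sync progress of the array via /proc/mdstat:

************
$cat /proc/mdstat
************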

8. Since Amazon Linux does not come with the mkfs.xfs program, you will have to install the "xfsprogs" package from the package manager

************
$sudo yum install -y xfsprogs
$ ls -la /sbin/mkfs*
-rwxr-xr-x 1 root root   9496 Jul  9  2014 /sbin/mkfs
-rwxr-xr-x 1 root root  28808 Jul  9  2014 /sbin/mkfs.cramfs
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext2
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext3
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext4
-rwxr-xr-x 1 root root 328632 Sep 12  2014 /sbin/mkfs.xfs
************

9. Create XFS file system on RAID10 volume

************
$ sudo mkfs.xfs /dev/md0
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md0               isize=256    agcount=8, agsize=98176 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=785408, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
************

10. Create a mount point to mount the raid device

************
$sudo mkdir /mnt/md0
************

11. Mount the raid volume to the mount point

************
$sudo mount -t xfs /dev/md0 /mnt/md0
************

12. Confirm the mount has been successful using the "df" command

************
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  1.1G  6.6G  14% /
devtmpfs        490M   88K  490M   1% /dev
tmpfs           499M     0  499M   0% /dev/shm
/dev/md0        3.0G   33M  3.0G   2% /mnt/md0
************

13. Check the I/O characteristics of the RAID10 volume

************
$sudo fdisk -l /dev/md0

Disk /dev/md0: 3218 MB, 3218079744 bytes, 6285312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
************

14. To mount the volume at system boot, add an entry for it to /etc/fstab.

************
$ sudo vi /etc/fstab 
$ cat /etc/fstab
#
LABEL=/     /           ext4    defaults,noatime  1   1
tmpfs       /dev/shm    tmpfs   defaults        0   0
devpts      /dev/pts    devpts  gid=5,mode=620  0   0
sysfs       /sys        sysfs   defaults        0   0
proc        /proc       proc    defaults        0   0
/dev/md0    /mnt/md0    xfs     defaults,nofail 0   2
************
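
One caveat: on reboot the array can be reassembled under a different name (e.g. /dev/md127) unless its definition is persisted, which would break the fstab entry above. Capturing the scan output into mdadm.conf guards against that:

************
$sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
************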

15. Run "mount -a" to confirm that there are no errors in the fstab

************
$ sudo mount -a
************

You can follow a similar set of steps to set up an ext4-based RAID volume, as described in the AWS documentation.

Monday, May 25, 2015

Securing SSH logins using fail2ban

The fail2ban utility helps mitigate brute-force attacks on SSH logins by automatically blacklisting the offending IP addresses in the iptables rules, based on filter conditions. To install fail2ban on your Amazon Linux instances, you can follow the steps below

1. Install fail2ban through package manager

************
$sudo yum install -y fail2ban
************

2. Create and edit the fail2ban configuration

************
$sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

$sudo vi /etc/fail2ban/jail.local
...
# The DEFAULT allows a global definition of the options. They can be overridden
# in each jail afterwards.

[DEFAULT]

# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not
# ban a host which matches an address in this list. Several addresses can be
# defined using space separator.
ignoreip = 127.0.0.1/8 198.172.1.10/32

# "bantime" is the number of seconds that a host is banned.
bantime  = 1800
....
[ssh-iptables]

enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
           sendmail-whois[name=SSH, dest=user@mycompany.com, sender=fail2ban@example.com]
logpath  = /var/log/secure
maxretry = 5
************

3. Configure iptables with some basic rules

************
$sudo iptables -A INPUT -i lo -j ACCEPT
$sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
$sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
$sudo iptables -A INPUT -j DROP
************

4. Start fail2ban service

************
$sudo service fail2ban start
************
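
To have fail2ban survive a reboot, also register it to start at boot:

************
$sudo chkconfig fail2ban on
************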

5. Inspect the iptables rules to make sure the fail2ban chain has been added

************
$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N fail2ban-SSH
-A INPUT -p tcp -m tcp --dport 22 -j fail2ban-SSH
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -j DROP
-A fail2ban-SSH -j RETURN
************

6. You will also receive an email from sendmail like below

************
From: Fail2Ban <fail2ban@example.com>
Date: Mon, May 25, 2015 at 10:58 AM
Subject: [Fail2Ban] SSH: started
To: admin@mycompany.com


Hi,

The jail SSH has been started successfully.

Regards,

Fail2Ban
************

7. Now try to SSH into the machine from an IP address that is not covered by the ignoreip rule; after maxretry failed attempts (5 in the config above), you will notice a permission denied message and see the below iptables rule added

***********
-A fail2ban-ssh -s 10.98.1.12/32 -j REJECT --reject-with icmp-port-unreachable
***********
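
You can also inspect the jail directly with fail2ban-client, which reports the currently banned IP addresses (the jail name matches the [ssh-iptables] section defined above):

***********
$sudo fail2ban-client status ssh-iptables
***********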

To understand in detail how fail2ban protects Linux servers, you can refer to the link below:-

https://www.digitalocean.com/community/tutorials/how-fail2ban-works-to-protect-services-on-a-linux-server

Sunday, May 24, 2015

Open source OWASP scanner - Zed Attack Proxy (ZAP)

You can integrate an OWASP scanner as part of your build pipeline to test your application for web-related vulnerabilities. Zed Attack Proxy (ZAP) allows you to continuously monitor your applications for OWASP vulnerabilities and mitigate them.

In order to install ZAP, you can follow the below steps on an Amazon Linux instance

1. Install OpenJDK from the package manager.

***************
$sudo yum install -y java-1.7.0-openjdk-devel-1.7.0.79-2.5.5.1.59
***************

2. Download and install ZAP proxy

***************
$wget http://downloads.sourceforge.net/project/zaproxy/2.4.0/ZAP_2.4.0_Linux.tar.gz
$tar xvfz ZAP_2.4.0_Linux.tar.gz
$sudo cp -Ra ZAP_2.4.0 /opt/zaproxy
***************

3. If you are running the proxy in a headless environment, then you will need to pass the below argument

**************
$sudo vi zap.sh
....
#Start ZAP

exec java ${JMEM} -Djava.awt.headless=true -XX:PermSize=256M -jar "${BASEDIR}/zap-2.4.0.jar" "$@"
**************

4. Increase Xmx value in zap.sh from 512m to 1024m
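
The heap size is carried by the JMEM variable referenced in the exec line above; the edit is a one-liner (the exact default wording may differ between releases):

**************
$sudo vi zap.sh
....
JMEM="-Xmx1024m"
**************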

5. Download and install the ZAP API client

*************
$wget http://hivelocity.dl.sourceforge.net/project/zaproxy/client-api/zap-api-2.4-v1.jar
$ jar xf zap-api-2.4-v1.jar
*************

6. Run the ZAP proxy in daemon (headless intercepting) mode

*************
$sudo ./zap.sh -daemon
*************

7. Execute a scan using the client API by sending an HTTP query - https://code.google.com/p/zaproxy/wiki/ApiDetailsActions

*************
$ curl -vvv http://localhost:8080/JSON/spider/action/scan/?url=http%3A%2F%2Fwww.google.com%3A80%2Fbodgeit%2F
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /JSON/spider/action/scan/?url=http%3A%2F%2Fwww.google.com%3A80%2Fbodgeit%2F HTTP/1.1
> User-Agent: curl/7.40.0
> Host: localhost:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Pragma: no-cache
< Cache-Control: no-cache
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET,POST,OPTIONS
< Access-Control-Allow-Headers: ZAP-Header
< X-Clacks-Overhead: GNU Terry Pratchett
< Content-Length: 12
< Content-Type: application/json; charset=UTF-8
*************
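
Once the spider has been kicked off, its progress can be polled through the corresponding view endpoint, which returns a JSON percentage:

*************
$ curl http://localhost:8080/JSON/spider/view/status/
*************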

Thursday, May 21, 2015

Convenient and fast SSL scanner: SSLyze

SSLyze is a handy SSL scanner that can report some of the common SSL-related vulnerabilities, like weak ciphers or Heartbleed. The tool can be obtained from:-

https://github.com/nabla-c0d3/sslyze/releases

To run the tool, execute it as below:-

**************
$ sslyze --regular <www.yoursite.com>:443

 AVAILABLE PLUGINS
 -----------------

  PluginSessionResumption
  PluginHeartbleed
  PluginCertInfo
  PluginChromeSha1Deprecation
  PluginCompression
  PluginSessionRenegotiation
  PluginOpenSSLCipherSuites
  PluginHSTS

 CHECKING HOST(S) AVAILABILITY
 -----------------------------

   www.yoursite.com:443 => <ip address>:443

 SCAN RESULTS FOR www.yoursite.com:443 - <ip address>:443
 --------------------------------------------------------------------------

  * Deflate Compression:
      OK - Compression disabled

  * Session Renegotiation:
      Client-initiated Renegotiations:   VULNERABLE - Server honors client-initiated renegotiations
      Secure Renegotiation:              OK - Supported

  * OpenSSL Heartbleed:
      OK - Not vulnerable to Heartbleed

  * Certificate - Content:
      SHA1 Fingerprint:                  d2675f5dd71b9d5c6331f1ab7e687e5122b437b0
      Common Name:                       www.yoursite.com
      Issuer:                            DigiCert Secure Server CA
      Serial Number:                     05E67DF64B406133A40A5F810DC7E568
      Not Before:                        Jan 21 00:00:00 2014 GMT
      Not After:                         Jan 25 12:00:00 2016 GMT
      Signature Algorithm:               sha1WithRSAEncryption
      Public Key Algorithm:              rsaEncryption
      Key Size:                          2048 bit
      Exponent:                          65537 (0x10001)
      X509v3 Subject Alternative Name:   {'DNS': ['www.yoursite.com']}

  * Certificate - Trust:
      Hostname Validation:               OK - Subject Alternative Name matches
      Microsoft CA Store (08/2014):      OK - Certificate is trusted
      Java 6 CA Store (Update 65):       OK - Certificate is trusted
      Apple CA Store (OS X 10.9.4):      OK - Certificate is trusted
      Mozilla NSS CA Store (08/2014):    OK - Certificate is trusted
      Certificate Chain Received:        ['www.yoursite.com', 'DigiCert Secure Server CA']

  * Certificate - OCSP Stapling:
      NOT SUPPORTED - Server did not send back an OCSP response.

  * TLSV1_2 Cipher Suites:
      Server rejected all cipher suites.

  * SSLV2 Cipher Suites:
      Server rejected all cipher suites.

  * Session Resumption:
      With Session IDs:                  OK - Supported (5 successful, 0 failed, 0 errors, 5 total attempts).
      With TLS Session Tickets:          OK - Supported

  * TLSV1_1 Cipher Suites:
      Server rejected all cipher suites.

  * TLSV1 Cipher Suites:
      Preferred:
                 AES256-SHA                    -              256 bits      HTTP 200 OK
      Accepted:
                 AES256-SHA                    -              256 bits      HTTP 200 OK
                 RC4-SHA                       -              128 bits      HTTP 200 OK
                 RC4-MD5                       -              128 bits      HTTP 200 OK
                 AES128-SHA                    -              128 bits      HTTP 200 OK
                 DES-CBC3-SHA                  -              112 bits      HTTP 200 OK

  * SSLV3 Cipher Suites:
      Preferred:
                 AES256-SHA                    -              256 bits      HTTP 200 OK
      Accepted:
                 AES256-SHA                    -              256 bits      HTTP 200 OK
                 RC4-SHA                       -              128 bits      HTTP 200 OK
                 RC4-MD5                       -              128 bits      HTTP 200 OK
                 AES128-SHA                    -              128 bits      HTTP 200 OK
                 DES-CBC3-SHA                  -              112 bits      HTTP 200 OK



 SCAN COMPLETED IN 16.14 S
 -------------------------
**************
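
If you only care about one class of issue, a single plugin can be invoked instead of the full --regular battery; for example, just the Heartbleed check:

**************
$ sslyze --heartbleed <www.yoursite.com>:443
**************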

Tuesday, May 19, 2015

Raw packet capture using ngrep as alternative to tcpdump

There are many packet capture tools available for when you want to do a quick raw capture. Though tcpdump is quite popular, other tools like ngrep can be handy. To download ngrep, you can get the source code from

http://ngrep.sourceforge.net/

***********
$wget http://tcpdiag.dl.sourceforge.net/project/ngrep/ngrep/1.45/ngrep-1.45.tar.bz2
***********

To install from source, follow the steps below:

***********
1. sudo yum install -y libpcap-devel
2. tar xvfj ngrep-1.45.tar.bz2
3. cd ngrep-1.45
4. ./configure
5. make
6. sudo make install
***********

NOTE - In the "configure" step, there could be errors like:

***********
more than one set found in:
/usr/include
/usr/include/pcap

please wipe out all unused pcap installations
***********

If you get the above error, a documented workaround is to temporarily move pcap-bpf.h, pcap.h and pcap-namedb.h out of /usr/include while configure runs.

The ngrep binary is placed under /usr/local/bin and you can run the ngrep utility like below:-

***********
$sudo ./ngrep -q -d eth0 -W byline host www.google.com and port 80
interface: eth0 (198.x.x.x/255.255.255.240)
filter: (ip) and ( host www.google.com and port 80 )

T 198.x.x.x:39924 -> 216.58.217.132:80 [AP]
GET / HTTP/1.1.
User-Agent: curl/7.40.0.
Host: www.google.com.
Accept: */*.
.


T 216.58.217.132:80 -> 198.x.x.x:39924 [A]
HTTP/1.1 200 OK.
Date: Tue, 19 May 2015 20:35:24 GMT.
Expires: -1.
Cache-Control: private, max-age=0.
Content-Type: text/html; charset=ISO-8859-1.
Set-Cookie: PREF=ID=376ff1b1d2b8ea9a:FF=0:TM=1432067724:LM=1432067724:S=KjiPBYS3DtDK-mjr; expires=Thu, 18-May-2017 20:35:24 GMT; path=/; domain=.google.com.
Set-Cookie: NID=67=jb9NrkGR-kfzXjPKDJ9cYemjLXpDBALNIY0Wuq3bTT4w2vaEeNkDwIYQf2zKwx3nUlBBaoWj81TGWswY2-PzDFfagMaBnFn-d9uI8hHbyfMa3g8e38iSTsnyXY8I-SNbcwOKiRlkWC5Y9phHHCGTunI4mVo; expires=Wed, 18-Nov-2015 20:35:24 GMT; path=/; domain=.google.com; HttpOnly.
P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info.".
Server: gws.
X-XSS-Protection: 1; mode=block.
X-Frame-Options: SAMEORIGIN.
Alternate-Protocol: 80:quic,p=1.
Accept-Ranges: none.
Vary: Accept-Encoding.
Transfer-Encoding: chunked.
.
45c1.
<!doctype html>...
***********
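
What sets ngrep apart from tcpdump is the match expression: in addition to a BPF filter, you can grep the packet payload itself. For example, to show only HTTP GET requests on port 80:

***********
$sudo ngrep -q -d eth0 -W byline "GET " port 80
***********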




Monday, May 18, 2015

Quick way to confirm whether the private key matches that of X509 certificate using Openssl

Typically, the modulus of the private and public keys should match, and a hash of their values makes for an easy comparison:

https://kb.wisc.edu/middleware/page.php?id=4064

*************
$ openssl x509 -noout -modulus -in server.crt | openssl md5
$ openssl rsa -noout -modulus -in server.key | openssl md5
*************
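
The same comparison works for a CSR as well, if that is what you have on hand:

*************
$ openssl req -noout -modulus -in server.csr | openssl md5
*************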

Sunday, May 17, 2015

Installing and configuring s3fs, FUSE over Amazon S3 bucket

For certain applications, you may want to mount an S3 bucket directly over FUSE (file system in user space). There are some limitations with this model; one of them is that the max file size is 64GB (imposed by s3fs, not Amazon S3). s3fs also documents the below:-

https://code.google.com/p/s3fs/wiki/FuseOverAmazon

"Due  to  S3's "eventual consistency" limitations, file creation can and will occasionally fail. Even  after  a  successful  create,  subsequent reads  can  fail for an indeterminate time, even after one or more successful reads. Create and read enough files  and  you  will  eventually encounter  this failure. This is not a flaw in s3fs and it is not something a FUSE wrapper like s3fs can work around. The retries option does not  address  this issue. Your application must either tolerate or compensate for these failures, for example by retrying creates or reads."

To install you can follow the steps below:-

1. Download the source

*************
$wget https://s3fs.googlecode.com/files/s3fs-1.74.tar.gz
*************

2. Install the necessary dependent libraries

*************
$sudo yum install gcc-c++ fuse-devel libxml2-devel libcurl-devel openssl-devel
*************

3. Compile and install s3fs

*************
$cd s3fs-1.74
$./configure --prefix=/usr
$sudo make
$sudo make install
*************

4. Confirm that the library got installed correctly

*************
$ grep s3 /etc/mtab
s3fs /vol fuse.s3fs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
$ s3fs --version
Amazon Simple Storage Service File System 1.74
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
*************

5. Mount the s3 bucket over FUSE as a mount point

*************
$mkdir <mount_point>
$sudo /usr/bin/s3fs -d -o allow_other <bucket_name> <mount_point>
*************
NOTE -

a. You can also enable allow_other via /etc/fuse.conf

*************
$ cat /etc/fuse.conf
# mount_max = 1000
user_allow_other (default is commented out)
*************

b. The -d command line parameter enables s3fs to write debug output to /var/log/messages as below

*************
May 17 23:03:04 ip-198-x-x-x dhclient[1874]: bound to 198.x.x.x -- renewal in 1381 seconds.
May 17 23:24:06 ip-198-x-x-x kernel: [187835.012553] fuse init (API version 7.22)
May 17 23:24:06 ip-198-x-x-x s3fs: init $Rev: 497 $
May 17 23:26:05 ip-198-x-x-x dhclient[1874]: DHCPREQUEST on eth0 to 198.x.x.x port 67 (xid=0x138279e0)
May 17 23:26:05 ip-198-x-x-x dhclient[1874]: DHCPACK from 198.x.x.x (xid=0x138279e0)
May 17 23:26:07 ip-198-x-x-x dhclient[1874]: bound to 198.x.x.x -- renewal in 1705 seconds.
May 17 23:34:13 ip-198-x-x-x s3fs: init $Rev: 497 $
May 17 23:47:32 ip-198-x-x-x s3fs: init $Rev: 497 $
May 17 23:48:21 ip-198-x-x-x s3fs: Body Text:
May 17 23:48:21 ip-198-x-x-x s3fs: Body Text:
May 17 23:48:21 ip-198-x-x-x s3fs: Body Text:
*************

c. If the s3 bucket is not mounted correctly, you will see an error like below:-

*************
$sudo cp test.txt <mount_point>
cp: failed to access ‘<mount_point>’: Transport endpoint is not connected
*************

6. In case you want to unmount the s3 bucket, you can use the below command

*************
$sudo fusermount -u <mount_point>
*************

7. Now that s3fs has been set up on the file system, you will have to set up the IAM user and bucket policy correctly

*************
IAM user policy with managed policy set to full s3 access

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

S3 bucket policy that allows all actions to the specific IAM user

{
  "Id": "Policy1431904706700",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1431904701345",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::<bucket_name>/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<aws_account_id>:user/<s3_user>"
        ]
      }
    }
  ]
}
*************
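
s3fs also needs the IAM user's access keys available locally; the classic mechanism is a passwd-s3fs file with restrictive permissions (the key ID and secret below are placeholders):

*************
$ echo "<access_key_id>:<secret_access_key>" | sudo tee /etc/passwd-s3fs
$ sudo chmod 640 /etc/passwd-s3fs
*************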

8. Now we can test by creating test files on the mounted volume

*************
$cd <mount_point>
$echo "this is the first test" >>test1.txt
$echo "this is the second test" >>test2.txt
$echo "this is the third test" >>test3.txt
*************

Now you will find the above files in your s3 bucket. If you would like to capture the packets you can use

*************
$sudo tcpdump -i eth0 -s 1500 -A -n "port not 22 and net 54" >> info.txt
*************
NOTE - If the tcpdump filter is set to the hostname s3.amazonaws.com, no traffic is seen. Here is the response from AWS support - "S3fs is sending traffic to S3 endpoints that have reverse DNS addresses that end in 'amazonaws.com', so in theory, tcpdump should allow you to filter on the hostname "amazonaws.com". But every time you try to use that filter, it doesn't show any traffic going to S3. In order to dump all traffic from eth0, we can filter out all traffic on port 22 (as we don't want to watch traffic from our own SSH session) and filter by IP address. As S3 endpoint IP addresses will be different, depending on your location, it may not make sense to filter by the entire IP address, but as the first octet will most likely always start with '54', so a command like this should give you the traffic."

In the S3 bucket, the logs folder will have the access logs for the files that were put in the bucket (assuming bucket logging has been enabled)

*************
82548f8fcda98eb96f29149b0cf3b8f4083f18b432adee0f38a9c4c52bc9b7cf <bucket_name> [17/May/2015:23:48:22 +0000] 54.84.186.187 arn:aws:iam::<aws_account_id>:user/<IAM user> DCEF0FA83504B586 REST.PUT.OBJECT test2.txt "PUT /test2.txt HTTP/1.1" 200 - - 24 275 "-" "-" -

Friday, May 15, 2015

Enabling multi-factor authentication (MFA) in Amazon EC2 linux instance using TOTP codes and Google Authenticator

For jump server boxes, you will want to enable multi-factor authentication using keys and a time-based one-time password (TOTP - https://tools.ietf.org/html/rfc6238). You can either use Google Authenticator (which recently became closed source) or FreeOTP (from Red Hat). You can follow the steps below:-

1. Install the dependent libraries

*************
$sudo yum install gcc autoconf automake libtool pam-devel
*************

2. Next you can decide whether to install google authenticator from source or from the package repository (I recommend the pkg repository)

*************
$sudo yum install -y google-authenticator
*************
NOTE - If you are installing from source, you have to follow the below steps
*************
$git clone https://github.com/google/google-authenticator.git
$cd /home/ec2-user/google-authenticator/libpam
$./bootstrap.sh
$./configure
$make
$sudo make install

NOTE - The result of "make install" adds pam_google_authenticator.so to the /usr/lib/security folder, but you will need to copy this file to the /lib64/security folder
*************

3. After you have successfully installed, you should find the google-authenticator shared object library in the below location

*************
$ sudo find ./* -name pam_google_authenticator.so
./lib64/security/pam_google_authenticator.so
*************

4. You can then create the OTP key

*************
$google-authenticator

Do you want authentication tokens to be time-based (y/n) y

Your new secret key is: xyz
Your verification code is xyz
Your emergency scratch codes are:
  690-----
  285-----
  670-----
  629-----
  538-----

Do you want me to update your "/home/ec2-user/.google_authenticator" file (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) y

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y
*************

NOTE - Enter the "Secret key" generated above in your google-authenticator mobile app or desktop app to generate the time based codes.

5. Add the below line at the top of /etc/pam.d/sshd file

*************
$sudo vi /etc/pam.d/sshd
auth       required     pam_google_authenticator.so
*************

6. Comment out the password authentication line in /etc/pam.d/sshd file

*************
$sudo vi /etc/pam.d/sshd
#auth       substack     password-auth
*************

7. Add the below SSH properties to /etc/ssh/sshd_config file

*************
$sudo vi /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
*************
NOTE - The "AuthenticationMethods" directive for multiple forms of identity may work only starting with OpenSSH_6.2 onwards

8. Restart the SSH daemon

*************
$sudo service sshd restart

or 

$sudo systemctl restart sshd.service
*************

Now you should be asked for a verification code upon login:-

************
$ssh -i testmfa.pem ec2-user@54.x.y.z
Authenticated with partial success.
Verification code:
Last login: Sat May 16 05:45:25 2015 from ....
************

Giving temporary access to an EC2 instance without sharing SSH keys

In some cases, you may want to give a user temporary access without sharing the instance's SSH keys (since by default PasswordAuthentication is turned off). You can do so by temporarily enabling password-based authentication and creating a temporary user.

1. Log into the EC2 instance using ssh keys and then edit /etc/ssh/sshd_config file as below:-

**************
/etc/ssh/sshd_config
#PermitEmptyPasswords no
#PasswordAuthentication no
PasswordAuthentication yes
**************

2. After enabling the PasswordAuthentication property, bounce the SSH daemon

**************
$sudo service sshd restart
**************

3. Create a temporary user and set the user's password:-

**************
$sudo useradd -d /home/tempuser -m -s /bin/bash tempuser
$sudo passwd tempuser
**************

4. Once the user has completed the work, you can remove the user and subsequently turn the PasswordAuthentication property back off, as shown below

**************
$sudo userdel tempuser
**************
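
To complete the revert, flip the property back and bounce sshd again (a sed one-liner is shown for brevity; editing the file by hand works just as well):

**************
$sudo sed -i 's/^PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
$sudo service sshd restart
**************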

Wednesday, May 13, 2015

Quick way to send a file using sendmail on an Amazon Linux instance

Many times you may want to send a particular excerpt from a config file, or the output from a program, to your email for better parsing. The "sendmail" utility provides a handy way to do that quickly, and you can follow the steps below:-

1. Install the uuencode utility (from the sharutils package) to encode the file for mailing

***********
$sudo yum install -y sharutils
***********

2. Now if you try to use sendmail as "ec2-user", you will see a permissions issue as below:-

***********
$uuencode ~/testfile.txt testfile.txt | sendmail test@example.com
WARNING: RunAsUser for MSP ignored, check group ids (egid=500, want=51)
can not chdir(/var/spool/clientmqueue/): Permission denied
Program mode requires special privileges, e.g., root or TrustedUser.
***********

3. You can sudo in as root and try the same command again

***********
# uuencode /home/ec2-user/testfile.txt testfile.txt | sendmail test@example.com
#
***********
NOTE - you have to press "Ctrl-D" to have sendmail send the mail.

Tuesday, May 12, 2015

Smallest AWS VPC CIDR block that can be partitioned into public and private subnets

The typical VPC CIDR block ranges from /16 to /28. However, if you create a VPC with a /28 CIDR block, there are not enough hosts within that block to partition it into private and public subnets (out of the available 16 addresses, 5 are reserved by AWS for internal use). For calculating the number of available hosts per subnet CIDR block, you can use the below tool:-

http://mxtoolbox.com/SubnetCalculator.aspx

If your architecture requires hosts that run in private subnets, you can allocate a /27 VPC CIDR block and then create 2 subnets (private, public) within that VPC; each /28 subnet holds 16 addresses, and 16 minus the 5 reserved leaves 11 available hosts per subnet. Once you arrive at that number, you can provision the VPC and separate out the subnets and route tables as below:-

1. Create the VPC with /27 cidr block

**************
$aws ec2 create-vpc --cidr-block 172.168.0.0/27 --query Vpc.VpcId 
"vpc-724b7e17"
**************

2. Create an internet gateway

**************
$aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId 
"igw-4e68062b"
**************

3. Attach the internet gateway to the VPC

**************
$aws ec2 attach-internet-gateway --internet-gateway-id igw-4e68062b --vpc-id vpc-724b7e17 
**************

4. Create a subnet within the VPC

**************
$aws ec2 create-subnet --vpc-id vpc-724b7e17 --cidr-block 172.168.0.0/28 
{
    "Subnet": {
        "VpcId": "vpc-724b7e17",
        "CidrBlock": "172.168.0.0/28",
        "State": "pending",
        "AvailabilityZone": "us-east-1c",
        "SubnetId": "subnet-fefdcbc4",
        "AvailableIpAddressCount": 11
    }
}

**************

5. Create a second subnet within VPC

**************
$aws ec2 create-subnet --vpc-id vpc-724b7e17 --cidr-block 172.168.0.16/28 
{
    "Subnet": {
        "VpcId": "vpc-724b7e17",
        "CidrBlock": "172.168.0.16/28",
        "State": "pending",
        "AvailabilityZone": "us-east-1c",
        "SubnetId": "subnet-d4faccee",
        "AvailableIpAddressCount": 11
    }
}
**************

6. Create a route table for the VPC

**************
$aws ec2 create-route-table --vpc-id vpc-724b7e17 
{
    "RouteTable": {
        "Associations": [],
        "RouteTableId": "rtb-a4c1c8c1",
        "VpcId": "vpc-724b7e17",
        "PropagatingVgws": [],
        "Tags": [],
        "Routes": [
            {
                "GatewayId": "local",
                "DestinationCidrBlock": "172.168.0.0/27",
                "State": "active",
                "Origin": "CreateRouteTable"
            }
        ]
    }
}
**************

7. Associate the route table with a particular subnet

**************
$aws ec2 associate-route-table --route-table-id rtb-a4c1c8c1 --subnet-id subnet-fefdcbc4 
{
    "AssociationId": "rtbassoc-04458260"
}
**************

8. Create route for a destination cidr block via internet gateway

**************
$aws ec2 create-route --route-table-id rtb-a4c1c8c1 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-4e68062b 
**************

After the above steps, you can create security groups and then create rules for ingress and egress. Once that is done, you should be good to launch an instance in the private subnet of the above VPC.
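
For example, to launch into the second subnet created in step 5, treating it as the private one (the AMI, key and security group IDs here are borrowed from the earlier RAID post, so substitute your own):

**************
$aws ec2 run-instances --image-id ami-1ecae776 --count 1 --instance-type t2.micro --key-name aminator --security-group-ids sg-7ad9a61e --subnet-id subnet-d4faccee
**************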

Friday, May 8, 2015

Kernel Hardening parameters recommended by Lynis

The defaults for RHEL 7.1 AMI in sysctl are
------------------------------------------------------------
# sysctl kernel.kptr_restrict
kernel.kptr_restrict = 0
# sysctl kernel.sysrq
kernel.sysrq = 16
# sysctl net.ipv4.conf.all.accept_redirects
net.ipv4.conf.all.accept_redirects = 1
# sysctl net.ipv4.conf.all.log_martians
net.ipv4.conf.all.log_martians = 0
# sysctl net.ipv4.conf.all.rp_filter
net.ipv4.conf.all.rp_filter = 0
# sysctl net.ipv4.conf.all.send_redirects
net.ipv4.conf.all.send_redirects = 1
# sysctl net.ipv4.conf.default.accept_redirects
net.ipv4.conf.default.accept_redirects = 1
# sysctl net.ipv4.conf.default.log_martians
net.ipv4.conf.default.log_martians = 0
# sysctl net.ipv4.tcp_timestamps
net.ipv4.tcp_timestamps = 1
# sysctl net.ipv6.conf.all.accept_redirects
net.ipv6.conf.all.accept_redirects = 1
# sysctl net.ipv6.conf.default.accept_redirects
net.ipv6.conf.default.accept_redirects = 1
------------------------------------------------------------

The documentation on each of these parameters is available here:
https://www.kernel.org/doc/Documentation/sysctl/kernel.txt
https://www.kernel.org/doc/Documentation/sysrq.txt
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
https://www.ietf.org/rfc/rfc1323.txt

------------------------------------------------------------
kptr_restrict:
This toggle indicates whether restrictions are placed on
exposing kernel addresses via /proc and other interfaces.

When kptr_restrict is set to (0), the default, there are no restrictions.

When kptr_restrict is set to (1), kernel pointers printed using the %pK
format specifier will be replaced with 0's unless the user has CAP_SYSLOG
and effective user and group ids are equal to the real ids. This is
because %pK checks are done at read() time rather than open() time, so
if permissions are elevated between the open() and the read() (e.g via
a setuid binary) then %pK will not leak kernel pointers to unprivileged
users. Note, this is a temporary solution only. The correct long-term
solution is to do the permission checks at open() time. Consider removing
world read permissions from files that use %pK, and using dmesg_restrict
to protect against uses of %pK in dmesg(8) if leaking kernel pointer
values to unprivileged users is a concern.

When kptr_restrict is set to (2), kernel pointers printed using
%pK will be replaced with 0's regardless of privileges.
------------------------------------------------------------
sysrq:
SysRq is a key combo you can hit which the kernel will respond to regardless of whatever else it is doing, unless it is completely locked up.
Possible values:
0 - disable sysrq completely
1 - enable all functions of sysrq
>1 - bitmask of allowed sysrq functions (see below for detailed function
      description):
         2 =   0x2 - enable control of console logging level
         4 =   0x4 - enable control of keyboard (SAK, unraw)
         8 =   0x8 - enable debugging dumps of processes etc.
        16 =  0x10 - enable sync command
        32 =  0x20 - enable remount read-only
        64 =  0x40 - enable signalling of processes (term, kill, oom-kill)
       128 =  0x80 - allow reboot/poweroff
       256 = 0x100 - allow nicing of all RT tasks
As we are discussing EC2 instances, physical key presses are not something you would generally expect.
------------------------------------------------------------
accept_redirects - BOOLEAN
Accept ICMP redirect messages.
accept_redirects for the interface will be enabled if:
- both conf/{all,interface}/accept_redirects are TRUE in the case
 forwarding for the interface is enabled
or
- at least one of conf/{all,interface}/accept_redirects is TRUE in the
 case forwarding for the interface is disabled
accept_redirects for the interface will be disabled otherwise
default TRUE (host)
FALSE (router)
------------------------------------------------------------
log_martians - BOOLEAN
Log packets with impossible addresses to kernel log.
log_martians for the interface will be enabled if at least one of
conf/{all,interface}/log_martians is set to TRUE,
it will be disabled otherwise
------------------------------------------------------------
rp_filter - INTEGER
0 - No source validation.
1 - Strict mode as defined in RFC3704 Strict Reverse Path
   Each incoming packet is tested against the FIB and if the interface
   is not the best reverse path the packet check will fail.
   By default failed packets are discarded.
2 - Loose mode as defined in RFC3704 Loose Reverse Path
   Each incoming packet's source address is also tested against the FIB
   and if the source address is not reachable via any interface
   the packet check will fail.

Current recommended practice in RFC3704 is to enable strict mode
to prevent IP spoofing from DDoS attacks. If using asymmetric routing
or other complicated routing, then loose mode is recommended.

The max value from conf/{all,interface}/rp_filter is used
when doing source validation on the {interface}.

Default value is 0. Note that some distributions enable it
in startup scripts.
------------------------------------------------------------
send_redirects - BOOLEAN
Send redirects, if router.
send_redirects for the interface will be enabled if at least one of
conf/{all,interface}/send_redirects is set to TRUE,
it will be disabled otherwise
Default: TRUE
------------------------------------------------------------
tcp_timestamps - BOOLEAN
Enable timestamps as defined in RFC1323
TCP timestamps are used to provide protection against wrapped sequence numbers. It is possible to calculate system uptime (and boot time) by analyzing TCP timestamps
------------------------------------------------------------

Most parameters can be modified (with the exception of kptr_restrict) with little impact, unless the instance is going to be used for routing/forwarding purposes (such as providing NAT services to other instances).
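
A minimal sketch of applying the hardened values via a drop-in sysctl file (the file name is arbitrary, and the values mirror the Lynis scan profile expectations; per the note above, weigh kptr_restrict separately):

------------------------------------------------------------
# cat /etc/sysctl.d/90-hardening.conf
kernel.kptr_restrict = 1
kernel.sysrq = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.log_martians = 1
net.ipv4.tcp_timestamps = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
# sysctl --system
------------------------------------------------------------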

Wednesday, May 6, 2015

Running periodic host based security auditing using Lynis tool

If you run production systems, whether on the cloud or on-premise, there is a need to periodically review and audit the security of the hosts and the infrastructure to meet compliance requirements. Lynis is an open source security auditing tool that can be run on hosts on a periodic basis (as a cron job) to provide the necessary reports for compliance. This utility is a good complement to a file integrity and IDS solution like OSSEC.

Lynis can be downloaded from -  https://cisofy.com/lynis/

or can be obtained from github repository using steps below

1. Clone the repository

*************
$sudo git clone https://github.com/CISOfy/lynis
*************

2. To run the audit, simply cd into the directory and run the audit system command

*************
$cd lynis
$sudo ./lynis audit system -Q
*************

The tool outputs the below files for review later:-

- Test and debug information      : /var/log/lynis.log
- Report data                     : /var/log/lynis-report.dat

You can also run the tool as a cron job using --cronjob switch and bash script as detailed in

https://cisofy.com/documentation/lynis/#installation
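
A minimal crontab entry along those lines might look like the below (the install path is an assumption based on where you cloned the repository; adjust accordingly):

************
$ sudo crontab -e
0 3 * * 6 cd /usr/local/lynis && ./lynis audit system --cronjob > /dev/null 2>&1
************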

To check which version of Lynis you are running, you can review the banner of lynis.log

************
[01:59:35] ===---------------------------------------------------------------===
[01:59:35] ### Copyright 2007-2015 - CISOfy, https://cisofy.com ###
[01:59:35] Program version:           2.1.1
[01:59:35] Operating system:          Linux
[01:59:35] Operating system name:     Red Hat
[01:59:35] Operating system version:  Red Hat Enterprise Linux Server release 7.1 (Maipo)
[01:59:35] Kernel version:            3.10.0
[01:59:35] Kernel version (full):     3.10.0-229.el7.x86_64
[01:59:35] Hardware platform:         x86_64
[01:59:35] Hostname:                  ip-198-162-0-7
[01:59:35] Auditor:                   [Unknown]
[01:59:35] Profile:                   ./default.prf
[01:59:35] Log file:                  /var/log/lynis.log
[01:59:35] Report file:               /var/log/lynis-report.dat
[01:59:35] Report version:            1.0
[01:59:35] -----------------------------------------------------
************

Some of the more interesting parts of the report are the Apache/nginx checks (if you are running a webserver) and the kernel parameter recommendations:

************
[+] Software: webserver
------------------------------------
  - Checking Apache (binary /usr/sbin/httpd)                  [ FOUND ]
AH00557: httpd: apr_sockaddr_info_get() failed for <ip address>
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
      Info: Configuration file found (/etc/httpd/conf/httpd.conf)
      Info: No virtual hosts found
    * Loadable modules                                        [ FOUND ]
        - Found 100 loadable modules
          mod_evasive: anti-DoS/brute force                   [ NOT FOUND ]
          mod_qos: anti-Slowloris                             [ NOT FOUND ]
          mod_spamhaus: anti-spam (spamhaus)                  [ NOT FOUND ]
          ModSecurity: web application firewall               [ NOT FOUND ]
  - Checking nginx                                            [ NOT FOUND ]

[+] Software: file integrity
------------------------------------
  - Checking file integrity tools
    - AFICK                                                   [ NOT FOUND ]
    - AIDE                                                    [ NOT FOUND ]
    - Osiris                                                  [ NOT FOUND ]
    - Samhain                                                 [ NOT FOUND ]
    - Tripwire                                                [ NOT FOUND ]
    - OSSEC (syscheck)                                        [ FOUND ]
    - mtree                                                   [ NOT FOUND ]
  - Checking presence integrity tool                          [ FOUND ]

[+] Kernel Hardening
------------------------------------
  - Comparing sysctl key pairs with scan profile
    - kernel.core_uses_pid (exp: 1)                           [ OK ]
    - kernel.ctrl-alt-del (exp: 0)                            [ OK ]
    - kernel.kptr_restrict (exp: 1)                           [ DIFFERENT ]
    - kernel.sysrq (exp: 0)                                   [ DIFFERENT ]
    - net.ipv4.conf.all.accept_redirects (exp: 0)             [ DIFFERENT ]
    - net.ipv4.conf.all.accept_source_route (exp: 0)          [ OK ]
    - net.ipv4.conf.all.bootp_relay (exp: 0)                  [ OK ]
    - net.ipv4.conf.all.forwarding (exp: 0)                   [ OK ]
    - net.ipv4.conf.all.log_martians (exp: 1)                 [ DIFFERENT ]
    - net.ipv4.conf.all.mc_forwarding (exp: 0)                [ OK ]
    - net.ipv4.conf.all.proxy_arp (exp: 0)                    [ OK ]
    - net.ipv4.conf.all.rp_filter (exp: 1)                    [ DIFFERENT ]
    - net.ipv4.conf.all.send_redirects (exp: 0)               [ DIFFERENT ]
    - net.ipv4.conf.default.accept_redirects (exp: 0)         [ DIFFERENT ]
    - net.ipv4.conf.default.accept_source_route (exp: 0)      [ OK ]
    - net.ipv4.conf.default.log_martians (exp: 1)             [ DIFFERENT ]
    - net.ipv4.icmp_echo_ignore_broadcasts (exp: 1)           [ OK ]
    - net.ipv4.icmp_ignore_bogus_error_responses (exp: 1)     [ OK ]
    - net.ipv4.tcp_syncookies (exp: 1)                        [ OK ]
    - net.ipv4.tcp_timestamps (exp: 0)                        [ DIFFERENT ]
    - net.ipv6.conf.all.accept_redirects (exp: 0)             [ DIFFERENT ]
    - net.ipv6.conf.all.accept_source_route (exp: 0)          [ OK ]
    - net.ipv6.conf.default.accept_redirects (exp: 0)         [ DIFFERENT ]
    - net.ipv6.conf.default.accept_source_route (exp: 0)      [ OK ]

************


Tuesday, May 5, 2015

Installing and configuring OSSEC for host based intrusion detection system (HIDS) on RHEL 7

OSSEC is an Open Source Host-based Intrusion Detection System that performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting and active response. You can get the latest version from their site - http://www.ossec.net/

To install a local version of OSSEC on RHEL 7, you can follow the steps below:-

1. Install gcc compiler and also "wget" package

$sudo yum -y install gcc wget

2. Download latest version of OSSEC

$wget -U ossec http://www.ossec.net/files/ossec-hids-2.8.1.tar.gz

3. Uncompress the source into a folder

$tar xvfz ossec-hids-2.8.1.tar.gz

4. change directory to the unzipped ossec folder

$cd /home/ec2-user/ossec-hids-2.8.1

5. Execute the install.sh script

$sudo ./install.sh

6. Enter responses to the questions asked by OSSEC

*************
(en/br/cn/de/el/es/fr/hu/it/jp/nl/pl/ru/sr/tr) [en]:
1- What kind of installation do you want (server, agent, local, hybrid or help)? local
2- Setting up the installation environment.
  - Choose where to install the OSSEC HIDS [/var/ossec]:/var/ossec
3- Configuring the OSSEC HIDS.
  3.1- Do you want e-mail notification? (y/n) [y]:y
     - What's your e-mail address? test@example.com
     - We found your SMTP server as: mail.example.com.
     - Do you want to use it? (y/n) [y]:
     --- Using SMTP server:  mail.example.com.
  3.2- Do you want to run the integrity check daemon? (y/n) [y]:y
     - Running syscheck (integrity check daemon).
  3.3- Do you want to run the rootkit detection engine? (y/n) [y]:y
     - Running rootcheck (rootkit detection).
  3.4- Active response allows you to execute a specific command based on the events received.
     - Do you want to enable active response? (y/n) [y]:y
       Active response enabled.
....
Accept defaults for the rest
*************

7. You can start the OSSEC services by running

$sudo /var/ossec/bin/ossec-control start
Starting OSSEC HIDS v2.8 (by Trend Micro Inc.)...
ossec-maild already running...
ossec-execd already running...
ossec-analysisd already running...
ossec-logcollector already running...
ossec-syscheckd already running...
ossec-monitord already running...
Completed.

8. To stop the ossec services, you can run

$sudo /var/ossec/bin/ossec-control stop
Killing ossec-monitord ..
Killing ossec-logcollector ..
Killing ossec-syscheckd ..
Killing ossec-analysisd ..
Killing ossec-maild ..
Killing ossec-execd ..
OSSEC HIDS v2.8 Stopped

9. Important configuration files can be located in the below folders:-

Rules - /var/ossec/rules
Configuration - /var/ossec/etc/ossec.conf
logs - /var/ossec/logs/ossec.log
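
For reference, the file integrity targets live in the syscheck section of ossec.conf; the stock configuration watches system directories like the below, and you can add your own application paths alongside them:

*************
<syscheck>
  <frequency>79200</frequency>
  <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
  <directories check_all="yes">/bin,/sbin</directories>
</syscheck>
*************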

10. In case of errors sending email via smtp, you will see the below errors in /var/ossec/logs/ossec.log

*************
2015/05/06 01:43:42 ossec-maild(1223): ERROR: Error Sending email to <ip address> (smtp server)
2015/05/06 01:44:22 ossec-maild(1223): ERROR: Error Sending email to  <ip address>(smtp server)

*************

NOTE - for similar steps on ubuntu you can refer to - https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ossec-security-notifications-on-ubuntu-14-04

Quick 10 min install of LAMP stack on RHEL 7

1. Install gcc on the machine if you want to compile any native applications

************
$sudo yum -y install gcc
************

2. In RHEL 7, Red Hat decided to bundle MariaDB instead of MySQL - https://mariadb.com/blog/rhel7-transition-mysql-mariadb-first-look
"MariaDB is the default implementation of MySQL in Red Hat Enterprise Linux 7. MariaDB is a community-developed fork of the MySQL database project, and provides a replacement for MySQL. MariaDB preserves API and ABI compatibility with MySQL and adds several new features; for example, a non-blocking client API library, the Aria and XtraDB storage engines with enhanced performance, better server status variables, and enhanced replication.
Detailed information about MariaDB can be found at https://mariadb.com/kb/en/what-is-mariadb-55/."
************
$sudo yum install mariadb-server mariadb mariadb-devel
************
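
Before running the secure-installation script in the next step, start the MariaDB server and enable it at boot (mysql_secure_installation needs a running server to connect to):

************
$sudo systemctl start mariadb.service
$sudo systemctl enable mariadb.service
************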

3. Secure the MariaDB installation, similar to the MySQL one

************
$sudo mysql_secure_installation
************

4. Install Apache httpd from package manager

************
$sudo yum install httpd
************
NOTE - all the Apache httpd conf files and log files should be under the /etc/httpd/conf and /etc/httpd/logs folders. The web root should be under the /var/www/html folder.

5. Add Apache httpd to systemd services to start on boot

************
$sudo systemctl enable httpd.service
************

6. Start the Apache httpd service

************
$sudo systemctl start httpd.service
************

7. Test whether the httpd is rendering the default landing page

************
$curl -vvv http://localhost/
************

8. Install php and its dependencies

************
$sudo yum install php php-mysql php-gd php-pear php-pgsql
************

9. Restart Apache httpd for loading php modules

************
$sudo systemctl restart httpd.service
************

10. Test with phpinfo page

************
$sudo vi /var/www/html/test.php

<?php
   phpinfo(INFO_GENERAL);
?>
************

11. Test whether phpinfo page is returning valid results

************
$curl -vvv http://localhost/test.php
************

Monday, May 4, 2015

Securely delete files on Windows machine using Sysinternals - "SDelete"

Download the "SDelete" utility from Microsoft site - https://technet.microsoft.com/en-us/sysinternals/bb897443.aspx

After downloading, extract the contents into a folder such as "C:\SDelete". SDelete implements the Department of Defense clearing and sanitizing standard DOD 5220.22-M.

SDelete is a command line utility that takes a number of options. In any given use, it allows you to delete one or more files and/or directories, or to cleanse the free space on a logical disk. SDelete accepts wild card characters as part of the directory or file specifier.

Usage: sdelete [-p passes] [-s] [-q] <file or directory> ...
       sdelete [-p passes] [-z|-c] [drive letter] ...

-a          Remove Read-Only attribute.
-c          Clean free space.
-p passes   Specifies number of overwrite passes (default is 1).
-q          Don't print errors (Quiet).
-s or -r    Recurse subdirectories.
-z          Zero free space (good for virtual disk optimization).

To execute, you can run "SDelete" from the folder where the executable has been downloaded:-

*********************
C:\SDelete>sdelete -p 10 -s -a "c:\test"
SDelete - Secure Delete v1.61
Copyright (C) 1999-2012 Mark Russinovich
Sysinternals - www.sysinternals.com

SDelete is set for 10 passes.

c:\test\test.docx...Scanning file: Reached the end of the file.

c:\test\test.pptx...deleted.
c:\test\test.rar...Scanning file: Reached the end of the file.

c:\test\sample\test-recursive.txt...Scanning file: Reached the end of the file.

Zeroing free space to securely delete compressed files: 10%

**********************

As an alternative, you can use another freeware "FileShredder" - http://www.fileshredder.org/

In this utility, you can select between several different deletion algorithms.




Thursday, April 30, 2015

AlertLogic Whitepaper: Understanding AWS Shared Security Model

As you may know, AWS shares the responsibility for security with the consumers of its IaaS services. In terms of ownership, AWS is responsible for securing the underlying cloud infrastructure, while consumers of its services are responsible for securing everything they deploy and run on top of it.

The whitepaper from AlertLogic outlines the below 7 best practices:-

SEVEN BEST PRACTICES FOR CLOUD SECURITY

There are seven key best practices for cloud security that you should implement in order to protect yourself from the next vulnerability and/or wide scale attack:

1. SECURE YOUR CODE
Securing code is 100% your responsibility, and hackers are continually looking for ways to compromise your applications. Code that has not been thoroughly tested and secured makes it all the easier for them to do harm. Make sure that security is part of your software development lifecycle: testing your libraries, scanning plugins, etc.

2. CREATE AN ACCESS MANAGEMENT POLICY
Logins are the keys to your kingdom and should be treated as such. Make sure you have a solid access management policy in place, especially concerning those who are granted access on a temporary basis. Integration of all applications and cloud environments into your corporate AD or LDAP centralized authentication model will help with this process, as will two-factor authentication.

3. ADOPT A PATCH MANAGEMENT APPROACH
Unpatched software and systems can lead to major issues; keep your environment secure by outlining a process whereby you update your systems on a regular basis. Consider developing a checklist of important procedures, and test all updates to confirm that they do not damage anything or create vulnerabilities before implementation into your live environment.

4. LOG MANAGEMENT
Log reviews should be an essential component of your organization's security protocols. Logs are now useful for far more than compliance; they have become a powerful security tool. You can use log data to monitor for malicious activity and to support forensic investigation.

5. BUILD A SECURITY TOOLKIT
No single piece of software is going to handle all of your security needs. You have to implement a defence-in-depth strategy that covers all your responsibilities in the stack. Implement iptables, web application firewalls, antivirus, intrusion detection, encryption and log management.

6. STAY INFORMED
Stay informed of the latest vulnerabilities that may affect you; the internet is a wealth of information. Use it to your advantage: search for the breaches and exploits that are happening in your industry.

7. UNDERSTAND YOUR CLOUD SERVICE PROVIDER SECURITY MODEL
Finally, as discussed, get to know your provider and understand where the lines are drawn, and plan accordingly. Cyber attacks are going to happen; vulnerabilities and exploits are going to be identified. By having a solid defence-in-depth strategy, coupled with the right tools and people that understand how to respond, you will put yourself in a position to minimise your exposure and risk.