Tuesday, May 26, 2015

Creating XFS RAID10 on Amazon Linux

AWS support typically recommends sticking with an ext(x) based file system, but for performance reasons you may want to create a RAID10 array with an XFS file system on top. To set up RAID10 backed by XFS, you can follow the steps below:

1. Create an Amazon Linux instance within a subnet in a VPC

************
$aws ec2 run-instances --image-id ami-1ecae776 --count 1 --instance-type t2.micro --key-name aminator --security-group-ids sg-7ad9a61e --subnet-id subnet-4d8df83a --associate-public-ip-address
************
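
The EBS volumes created in step 2 must be in the same Availability Zone as this instance, so it helps to look the zone up first. The command below is a sketch that reuses the example instance ID from step 3.

************
$aws ec2 describe-instances --instance-ids i-120d96c2 --query 'Reservations[0].Instances[0].Placement.AvailabilityZone' --output text
************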

2. Create the EBS volumes. For RAID10, you will need 6 block storage devices, each created with a command similar to the one shown below

************
$aws ec2 create-volume --size 1 --region us-east-1 --availability-zone us-east-1d --volume-type gp2
************
NOTE - The EBS volumes must be created in the same region and Availability Zone as the instance.
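
Since six identical volumes are needed, a small shell loop saves some typing. This is just a sketch reusing the size, volume type, and Availability Zone from the example above.

************
$for i in 1 2 3 4 5 6; do
>   aws ec2 create-volume --size 1 --region us-east-1 --availability-zone us-east-1d --volume-type gp2
> done
************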

3. Attach each of the created volumes to the instance as shown below, repeating for each of the six devices (/dev/xvdb through /dev/xvdg)

************
$aws ec2 attach-volume --volume-id vol-c33a982d --instance-id i-120d96c2 --device /dev/xvdb
************
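
A small loop can attach all six volumes in one go; the volume IDs other than the first are placeholders and should be replaced with the IDs returned by create-volume.

************
# Placeholder volume IDs - substitute the IDs returned by create-volume
$vols=(vol-c33a982d vol-xxxxxxx2 vol-xxxxxxx3 vol-xxxxxxx4 vol-xxxxxxx5 vol-xxxxxxx6)
$devs=(b c d e f g)
$for i in 0 1 2 3 4 5; do
>   aws ec2 attach-volume --volume-id ${vols[$i]} --instance-id i-120d96c2 --device /dev/xvd${devs[$i]}
> done
************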

4. Confirm that the devices have been attached successfully

************
$ ls -l /dev/sd*
lrwxrwxrwx 1 root root 4 May 27 00:44 /dev/sda -> xvda
lrwxrwxrwx 1 root root 5 May 27 00:44 /dev/sda1 -> xvda1
lrwxrwxrwx 1 root root 4 May 27 00:57 /dev/sdb -> xvdb
lrwxrwxrwx 1 root root 4 May 27 00:57 /dev/sdc -> xvdc
lrwxrwxrwx 1 root root 4 May 27 00:58 /dev/sdd -> xvdd
lrwxrwxrwx 1 root root 4 May 27 00:59 /dev/sde -> xvde
lrwxrwxrwx 1 root root 4 May 27 00:59 /dev/sdf -> xvdf
lrwxrwxrwx 1 root root 4 May 27 01:00 /dev/sdg -> xvdg
************
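
You can also cross-check from the AWS side that all six volumes report as attached, again using the example instance ID.

************
$aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=i-120d96c2
************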

5. Check the block device I/O characteristics using "fdisk"

************
 $sudo fdisk -l /dev/xvdc

Disk /dev/xvdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
************
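
The same check can be run across all six devices with a quick loop.

************
$for d in b c d e f g; do sudo fdisk -l /dev/xvd$d; done
************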

6. Create RAID10 using "mdadm" command

************
$sudo mdadm --create --verbose /dev/md0 --level=raid10 --raid-devices=6 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf /dev/xvdg
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 1047552K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
************
NOTE - In case only striping or mirroring is required, you can specify "raid0" or "raid1" respectively for the "level" parameter
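
For reference, the striped-only and mirrored-only variants of the same command would look roughly as follows; RAID0 can span all six devices, while a RAID1 mirror is typically built from a pair.

************
# RAID0 - striping only, across all six devices
$sudo mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=6 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf /dev/xvdg

# RAID1 - mirroring only, across a pair of devices
$sudo mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/xvdb /dev/xvdc
************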

7. Confirm that the RAID10 array has been created successfully

************
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE   MOUNTPOINT
xvda    202:0    0   8G  0 disk
+-xvda1 202:1    0   8G  0 part   /
xvdb    202:16   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdc    202:32   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdd    202:48   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvde    202:64   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdf    202:80   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdg    202:96   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
************
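
The state of the array and the progress of the initial resync can also be checked with /proc/mdstat or mdadm itself.

************
$cat /proc/mdstat
$sudo mdadm --detail /dev/md0
************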

8. Since Amazon Linux does not ship with the mkfs.xfs program, you will have to install the "xfsprogs" package from the package manager

************
$sudo yum install -y xfsprogs
$ ls -la /sbin/mkfs*
-rwxr-xr-x 1 root root   9496 Jul  9  2014 /sbin/mkfs
-rwxr-xr-x 1 root root  28808 Jul  9  2014 /sbin/mkfs.cramfs
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext2
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext3
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext4
-rwxr-xr-x 1 root root 328632 Sep 12  2014 /sbin/mkfs.xfs
************

9. Create the XFS file system on the RAID10 volume

************
$ sudo mkfs.xfs /dev/md0
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md0               isize=256    agcount=8, agsize=98176 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=785408, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
************

10. Create a mount point for the RAID device

************
$sudo mkdir /mnt/md0
************

11. Mount the RAID volume at the mount point

************
$sudo mount -t xfs /dev/md0 /mnt/md0
************

12. Confirm that the mount was successful using the "df" command

************
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  1.1G  6.6G  14% /
devtmpfs        490M   88K  490M   1% /dev
tmpfs           499M     0  499M   0% /dev/shm
/dev/md0        3.0G   33M  3.0G   2% /mnt/md0
************
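
Optionally, xfs_info (also part of xfsprogs) can be run against the mount point to confirm the stripe unit (sunit) and stripe width (swidth) that mkfs.xfs derived from the RAID geometry.

************
$sudo xfs_info /mnt/md0
************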

13. Check the I/O characteristics of the RAID10 volume

************
$sudo fdisk -l /dev/md0

Disk /dev/md0: 3218 MB, 3218079744 bytes, 6285312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
************

14. To mount the volume automatically on system bootup, add an entry for it to /etc/fstab as shown below.

************
$ sudo vi /etc/fstab 
$ cat /etc/fstab
#
LABEL=/     /           ext4    defaults,noatime  1   1
tmpfs       /dev/shm    tmpfs   defaults        0   0
devpts      /dev/pts    devpts  gid=5,mode=620  0   0
sysfs       /sys        sysfs   defaults        0   0
proc        /proc       proc    defaults        0   0
/dev/md0    /mnt/md0    xfs     defaults,nofail 0   2
************

15. Run "mount -a" to confirm that there are no errors in the fstab

************
$ sudo mount -a
************
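
NOTE - The fstab entry assumes the array will always be assembled as /dev/md0 after a reboot. One way to keep the device name stable is to record the array definition in /etc/mdadm.conf, for example:

************
# Append the array definition so it is assembled under the same name at boot
$sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
************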

You can follow a similar set of steps to set up an ext4 based RAID volume, as described in the AWS documentation on RAID configuration.
