There are several considerations to weigh before settling on any particular RAID configuration. As a first step, the Amazon documentation provides some good insights:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
If you have ZFS running on Linux (a similar setup can be achieved with mdadm and xfs or ext3/ext4), there are several nifty features you will come to like. For example, a RAID10 (striped, mirrored) array can be set up with six EBS volumes (NOTE - they should ideally be of equal size). With six 1 GiB volumes, a RAID10 configuration gives you roughly 3 GiB of usable capacity.
First, we create the EBS volumes using the AWS CLI:
$ aws ec2 create-volume --size 1 --region us-east-1 --availability-zone us-east-1a --volume-type gp2
{
    "AvailabilityZone": "us-east-1a",
    "Attachments": [],
    "Tags": [],
    "VolumeType": "gp2",
    "VolumeId": "vol-e7d33aa0",
    "State": "creating",
    "Iops": 300,
    "SnapshotId": null,
    "CreateTime": "2015-02-19T22:26:54.738Z",
    "Size": 1
}
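Since the RAID10 layout needs six of these volumes, repeating the command six times by hand gets tedious. Below is a minimal sketch of doing the same thing in a loop; the region, zone, and size are simply the values used above, and the volume-ids.txt file name is just an arbitrary choice for collecting the IDs:
$ for i in $(seq 1 6); do
>   # create a 1 GiB gp2 volume and record just its VolumeId
>   aws ec2 create-volume --size 1 --region us-east-1 --availability-zone us-east-1a \
>     --volume-type gp2 --query 'VolumeId' --output text >> volume-ids.txt
> done
$ cat volume-ids.txt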
NOTE - With GP2 (General Purpose SSD) volumes, the baseline performance is 3 IOPS per GiB, with the ability to burst to 3,000 IOPS for up to 30 minutes. Amazon recommends Provisioned IOPS volumes where I/O-bound applications are running. From the EBS documentation: "Each volume receives an initial I/O credit balance of 5,400,000 I/O credits, which is enough to sustain the maximum burst performance of 3,000 IOPS for 30 minutes. This initial credit balance is designed to provide a fast initial boot cycle for boot volumes and to provide a good bootstrapping experience for other applications. Volumes earn I/O credits every second at a baseline performance rate of 3 IOPS per GiB of volume size. For example, a 100 GiB General Purpose (SSD) volume has a baseline performance of 300 IOPS."
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_gp2
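To make the credit arithmetic concrete, here is a quick back-of-the-envelope check in the shell, using the 3 IOPS per GiB baseline quoted above for one of the 1 GiB volumes created earlier:
$ # initial credits / (burst rate - baseline earn rate) = seconds of sustained burst
$ echo $(( 5400000 / (3000 - 3) ))
1801
which is roughly the 30 minutes of burst the documentation describes.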
In general, if your bottleneck is IOPS, Provisioned IOPS volumes are going to be a better choice than GP2 in most use cases. Once the burst credits are spent, you drop back down to the baseline IOPS - for example, on a 100 GiB GP2 volume, that would be 300 IOPS. If your application is pushing 3,000 IOPS and abruptly drops to 300, applications and even the OS can hang due to the sudden drop in I/O.
On the other hand, with Provisioned IOPS you can push 4,000 IOPS per volume, all day, every day. It's a bit more expensive, but the performance difference is often worth it.
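For comparison, creating a Provisioned IOPS (io1) volume with the same CLI looks like the sketch below; the 200 GiB size and 4,000 IOPS are only illustrative figures (io1 enforces a maximum IOPS-to-size ratio, so very small volumes cannot be provisioned at 4,000 IOPS):
$ aws ec2 create-volume --size 200 --region us-east-1 --availability-zone us-east-1a \
    --volume-type io1 --iops 4000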
Once the volumes have been created, they can be attached to the instance using the CLI:
$ aws ec2 attach-volume --volume-id vol-e7d33aa0 --instance-id i-9e169771 --device /dev/xvdb
{
    "AttachTime": "2015-02-19T22:45:08.368Z",
    "InstanceId": "i-9e169771",
    "VolumeId": "vol-e7d33aa0",
    "State": "attaching",
    "Device": "/dev/xvdb"
}
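If you saved the volume IDs to volume-ids.txt as in the earlier sketch, a small bash loop can wait for the volumes to become available and attach them to consecutive device names (the instance ID here is the same one used above):
$ aws ec2 wait volume-available --volume-ids $(cat volume-ids.txt)
$ devices=(b c d e f g); i=0
$ for vol in $(cat volume-ids.txt); do
>   # attach volumes as /dev/xvdb through /dev/xvdg
>   aws ec2 attach-volume --volume-id "$vol" --instance-id i-9e169771 --device /dev/xvd${devices[$i]}
>   i=$((i + 1))
> done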
NOTE - If you create a volume in a different Availability Zone from the one where your instance is running, you will get an error like: "A client error (InvalidVolume.ZoneMismatch) occurred when calling the AttachVolume operation: The volume 'vol-24de8556' is not in the same availability zone as instance 'i-9e169771'".
After you attach all six volumes, you should see something like the following:
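A simple way to avoid this is to look up the instance's Availability Zone first and create the volumes there; the query path below is part of the standard describe-instances output:
$ aws ec2 describe-instances --instance-ids i-9e169771 \
    --query 'Reservations[0].Instances[0].Placement.AvailabilityZone' --output text
us-east-1a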
$ ls -l /dev/sd*
lrwxrwxrwx 1 root root 4 Mar 3 05:50 /dev/sda -> xvda
lrwxrwxrwx 1 root root 5 Mar 3 05:50 /dev/sda1 -> xvda1
lrwxrwxrwx 1 root root 4 Mar 7 18:42 /dev/sdb -> xvdb
lrwxrwxrwx 1 root root 4 Mar 3 20:43 /dev/sdc -> xvdc
lrwxrwxrwx 1 root root 4 Mar 3 20:43 /dev/sdd -> xvdd
lrwxrwxrwx 1 root root 4 Mar 3 20:43 /dev/sde -> xvde
lrwxrwxrwx 1 root root 4 Mar 3 20:43 /dev/sdf -> xvdf
lrwxrwxrwx 1 root root 4 Mar 3 20:43 /dev/sdg -> xvdg
You can also check the details of each disk as below:
$ sudo fdisk -l /dev/xvdb
Disk /dev/xvdb: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
(At this point "zpool status" simply reports "no pools available", since no pool has been created yet.)
Now we are ready to create the RAID10 array using ZFS, with the command below:
$ sudo zpool create -f testraid10 mirror sdb sdc mirror sdd sde mirror sdf sdg
Now we can check the status of the RAID10 pool:
$ sudo zpool status
  pool: testraid10
 state: ONLINE
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        testraid10      ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            sdb         ONLINE       0     0     0
            sdc         ONLINE       0     0     0
          mirror-1      ONLINE       0     0     0
            sdd         ONLINE       0     0     0
            sde         ONLINE       0     0     0
          mirror-2      ONLINE       0     0     0
            sdf         ONLINE       0     0     0
            sdg         ONLINE       0     0     0
You can confirm that the pool has been mounted using "df -h":
$ df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/xvda1      7.8G  1.5G   6.2G   20%  /
devtmpfs        3.7G   80K   3.7G    1%  /dev
tmpfs           3.7G     0   3.7G    0%  /dev/shm
testraid10      3.0G     0   3.0G    0%  /testraid10
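Besides "df -h", "zpool list" gives a quick summary of the pool's raw size, allocation, and health (the exact columns vary a bit between ZFS versions):
$ sudo zpool list testraid10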
To check the I/O performance of the individual volumes in the pool, you can use the "zpool iostat" command as below:
$ sudo zpool iostat testraid10 -v
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testraid10   129K  2.98T      0      0     40    657
  mirror      48K  1016G      0      0     21    235
    sdb          -      -      0      0     76  4.71K
    sdc          -      -      0      0     95  4.71K
  mirror      33K  1016G      0      0      0    186
    sdd          -      -      0      0     74  4.66K
    sde          -      -      0      0     74  4.66K
  mirror      48K  1016G      0      0     18    235
    sdf          -      -      0      0     88  4.71K
    sdg          -      -      0      0     79  4.71K
----------  -----  -----  -----  -----  -----  -----
NOTE - the performance of the RAID10 pool will be in line with the slowest disk in the pool.
With the above steps you will have a functioning RAID10 pool on your Amazon Linux instance using ZFS.
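If you want a quick sanity check of sequential write throughput before putting real data on the pool, a simple dd run is enough; the file name is arbitrary, and conv=fdatasync makes dd flush to disk so the figure is not just page-cache speed (zeros are a poor test if you later enable ZFS compression):
$ sudo dd if=/dev/zero of=/testraid10/ddtest bs=1M count=256 conv=fdatasync
$ sudo rm /testraid10/ddtest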