You may find that the originally allocated size of the root volume is not sufficient and you need to resize it. For this example, we will create a new instance with a 10 GB root volume, which we will then expand to 20 GB.
1. Create a new instance
*************
$aws ec2 run-instances --image-id ami-12663b7a --count 1 --instance-type t2.micro --key-name <key_name> --security-group-ids sg-7ad9a61e --subnet-id subnet-4d8df83a --associate-public-ip-address
*************
2. From the output of the "run-instances" command, note the volume ID (vol-292a653f), the snapshot ID (snap-a2948fc3), and the availability zone (us-east-1d) of the volume and instance
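If you would rather not read the JSON by eye, the same details can be pulled with JMESPath queries. A minimal sketch, where <instance_id> is a placeholder for the instance ID returned by "run-instances", and which assumes the root device is the first block device mapping:
*************
# Root volume ID and availability zone of the new instance
$aws ec2 describe-instances --instance-ids <instance_id> --query 'Reservations[].Instances[].[BlockDeviceMappings[0].Ebs.VolumeId,Placement.AvailabilityZone]' --output text
# Snapshot backing the AMI's root device
$aws ec2 describe-images --image-ids ami-12663b7a --query 'Images[].BlockDeviceMappings[0].Ebs.SnapshotId' --output text
*************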
3. Now create a new 20 GB volume from the snapshot in step #2
*************
$aws ec2 create-volume --snapshot-id snap-a2948fc3 --size 20 --availability-zone us-east-1d --volume-type gp2
{
"AvailabilityZone": "us-east-1d",
"Encrypted": false,
"VolumeType": "gp2",
"VolumeId": "vol-b8347bae",
"State": "creating",
"Iops": 60,
"SnapshotId": "snap-a2948fc3",
"CreateTime": "2015-04-26T05:05:24.865Z",
"Size": 20
}
*************
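Note that the new volume starts out in the "creating" state. Before attaching it, you can block until it becomes available using the CLI's built-in waiter:
*************
$aws ec2 wait volume-available --volume-ids vol-b8347bae
*************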
4. Attach the new 20 GB volume to another instance
*************
$aws ec2 attach-volume --volume-id vol-b8347bae --instance-id i-6a452e97 --device /dev/xvdf
{
"AttachTime": "2015-04-26T05:08:26.001Z",
"InstanceId": "i-6a452e97",
"VolumeId": "vol-b8347bae",
"State": "attaching",
"Device": "/dev/xvdf"
}
*************
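Attachment is asynchronous as well (note "State": "attaching" in the output above), so you can wait for it to complete the same way:
*************
$aws ec2 wait volume-in-use --volume-ids vol-b8347bae
*************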
5. Describe instances to check whether the new volume is attached
*************
$aws ec2 describe-instances --filters Name=instance-id,Values=i-6a452e97 --output table
|+---------------------------+--------------------------+
||                  BlockDeviceMappings                 ||
|+---------------------------+--------------------------+
||  DeviceName               |  /dev/sda1               ||
|+---------------------------+--------------------------+
|||                         Ebs                        |||
||+----------------------+------------------------------+||
|||  AttachTime          |  2015-04-26T04:23:35.000Z   |||
|||  DeleteOnTermination |  True                       |||
|||  Status              |  attached                   |||
|||  VolumeId            |  vol-292a653f               |||
||+----------------------+------------------------------+||
||                  BlockDeviceMappings                 ||
|+---------------------------+--------------------------+
||  DeviceName               |  /dev/xvdf               ||
|+---------------------------+--------------------------+
|||                         Ebs                        |||
||+----------------------+------------------------------+||
|||  AttachTime          |  2015-04-26T05:08:25.000Z   |||
|||  DeleteOnTermination |  False                      |||
|||  Status              |  attached                   |||
|||  VolumeId            |  vol-b8347bae               |||
||+----------------------+------------------------------+||
*************
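The table output is fairly verbose; if you only care about the block device mappings, a JMESPath query trims it down to the essentials:
*************
$aws ec2 describe-instances --instance-ids i-6a452e97 --query 'Reservations[].Instances[].BlockDeviceMappings[].[DeviceName,Ebs.VolumeId,Ebs.Status]' --output table
*************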
6. Now log into the EC2 instance where the two volumes are attached and confirm the volume attachments
*************
$ sudo cat /proc/partitions
major minor #blocks name
202 0 10485760 xvda
202 1 1024 xvda1
202 2 10483695 xvda2
202 80 20971520 xvdf
202 81 1024 xvdf1
202 82 10483695 xvdf2
$ ls -l /dev/xvd*
brw-rw----. 1 root disk 202, 0 Apr 26 00:24 /dev/xvda
brw-rw----. 1 root disk 202, 1 Apr 26 00:24 /dev/xvda1
brw-rw----. 1 root disk 202, 2 Apr 26 00:24 /dev/xvda2
brw-rw----. 1 root disk 202, 80 Apr 26 01:08 /dev/xvdf
brw-rw----. 1 root disk 202, 81 Apr 26 01:08 /dev/xvdf1
brw-rw----. 1 root disk 202, 82 Apr 26 01:08 /dev/xvdf2
*************
7. Since the block device has two partitions, you can run the fdisk command to inspect them
*************
$ sudo fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/xvda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
# Start End Size Type Name
1 2048 4095 1M BIOS boot parti
2 4096 20971486 10G Microsoft basic
Disk /dev/xvdf: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000
*************
8. Check the size of the partitions using the "lsblk" command and see whether the second partition can be expanded
*************
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  10G  0 disk
├─xvda1 202:1    0   1M  0 part
└─xvda2 202:2    0  10G  0 part /
xvdf    202:80   0  20G  0 disk
├─xvdf1 202:81   0   1M  0 part
└─xvdf2 202:82   0  10G  0 part
**************
9. Now if you try to mount the second EBS volume, you may get an error
**************
$ sudo mount /dev/xvdf2 /vol
mount: /dev/xvdf2 is write-protected, mounting read-only
mount: unknown filesystem type '(null)'
**************
10. You can check the dmesg output for more information about the above mount error
**************
$ dmesg |tail
[ 5233.363312] XFS (xvdf2): Filesystem has duplicate UUID 6785eb86-c596-4229-85fb-4d30c848c6e8 - can't mount
**************
11. Since the above error indicates a duplicate UUID on the second volume, you can generate a new UUID
**************
$ sudo xfs_admin -U generate /dev/xvdf2
Clearing log and setting UUID
writing all SBs
new UUID = 59c3b4c4-ca99-45f0-9c25-ffd7bbc93581
**************
NOTE - If you would like to temporarily mount the volume without changing the UUID, you can run the "mount -o nouuid /dev/xvdf2 /vol" command instead.
12. Verify that unique UUIDs are now present on both EBS volumes
**************
$ blkid
/dev/xvda2: UUID="6785eb86-c596-4229-85fb-4d30c848c6e8" TYPE="xfs" PARTUUID="e8c8ba12-3669-4698-b59b-2db878461f9a"
/dev/xvdf2: UUID="59c3b4c4-ca99-45f0-9c25-ffd7bbc93581" TYPE="xfs" PARTUUID="e8c8ba12-3669-4698-b59b-2db878461f9a"
**************
13. Verify that the filesystem on the volume can be read by xfs_info
**************
$ sudo xfs_info /dev/xvdf2
meta-data=/dev/xvdf2 isize=256 agcount=7, agsize=393216 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=2620923, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
**************
14. Unmount the volume before running gdisk or parted to expand the partition
*************
$ sudo umount /dev/xvdf /vol
umount: /dev/xvdf: not mounted
*************
15. Follow the steps outlined in AWS documentation for expanding the volume using gdisk (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage_expand_partition.html#expanding-partition-gdisk)
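As an aside, if the cloud-utils package is available on your distribution, its "growpart" tool can extend the partition in a single non-interactive command instead of the interactive gdisk/parted sessions shown below:
*************
# grow partition 2 of /dev/xvdf to fill the free space (assumes growpart is installed)
$ sudo growpart /dev/xvdf 2
*************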
16. Run gdisk on /dev/xvdf
*************
$ sudo gdisk /dev/xvdf
GPT fdisk (gdisk) version 0.8.6
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): p
Disk /dev/xvdf: 41943040 sectors, 20.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): E804A732-997D-4B6A-B0F4-652EBF839AFB
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 20971486
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 4095 1024.0 KiB EF02
2 4096 20971486 10.0 GiB 0700
Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y
Command (? for help): p
Disk /dev/xvdf: 41943040 sectors, 20.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 922D5EF3-D83A-4395-AEEE-0019A36FB2E0
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 41943006
Partitions will be aligned on 2048-sector boundaries
Total free space is 41942973 sectors (20.0 GiB)
Number Start (sector) End (sector) Size Code Name
Command (? for help): n
Partition number (1-128, default 1): 1
First sector (34-41943006, default = 2048) or {+-}size{KMGTP}: 2048
Last sector (2048-41943006, default = 41943006) or {+-}size{KMGTP}: 4095
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): EF02
Changed type of partition to 'BIOS boot partition'
Command (? for help): n
Partition number (2-128, default 2): 2
First sector (34-41943006, default = 4096) or {+-}size{KMGTP}: 4096
Last sector (4096-41943006, default = 41943006) or {+-}size{KMGTP}: 41943006
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 0700
Changed type of partition to 'Microsoft basic data'
Command (? for help): x
Expert command (? for help): g
Enter the disk's unique GUID ('R' to randomize): E804A732-997D-4B6A-B0F4-652EBF839AFB
The new disk GUID is E804A732-997D-4B6A-B0F4-652EBF839AFB
Expert command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/xvdf.
The operation has completed successfully.
*************
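The same session can also be scripted with "sgdisk", the non-interactive sibling of gdisk, assuming it is installed: "-e" relocates the backup GPT header to the new end of the disk, and the second partition is then deleted and recreated to span the remaining space (an end sector of 0 means "use the maximum"):
*************
# move the backup GPT header to the end of the enlarged disk
$ sudo sgdisk -e /dev/xvdf
# recreate partition 2 from sector 4096 to the end, keeping type code 0700
$ sudo sgdisk -d 2 -n 2:4096:0 -t 2:0700 /dev/xvdf
*************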
Instead of gdisk, you can also use a tool like "parted"; the steps would be as follows:
*************
$sudo parted /dev/xvdf
GNU Parted 3.1
Using /dev/xvdf
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) p
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 41943040s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 2048s 4095s 2048s BIOS boot partition bios_grub
(parted) mkpart Linux 4096s 100%
(parted) p
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 41943040s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 2048s 4095s 2048s BIOS boot partition bios_grub
2 4096s 41940991s 41936896s xfs Linux
(parted) q
Information: You may need to update /etc/fstab.
*************
17. Confirm that the volume partitions have been resized
*************
$sudo lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  10G  0 disk
├─xvda1 202:1    0   1M  0 part
└─xvda2 202:2    0  10G  0 part /
xvdf    202:80   0  20G  0 disk
├─xvdf1 202:81   0   1M  0 part
└─xvdf2 202:82   0  20G  0 part
*************
18. Now you can mount the volume on the instance
*************
$ sudo mount /dev/xvdf2 /vol
*************
19. Now grow the XFS filesystem using the xfs_growfs utility
*************
$sudo xfs_growfs /dev/xvdf2
meta-data=/dev/xvdf2 isize=256 agcount=14, agsize=393216 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=5242112, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
*************
20. Confirm that the mount point (/vol) now reflects the correct size
*************
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 10G 867M 9.2G 9% /
devtmpfs 480M 0 480M 0% /dev
tmpfs 497M 0 497M 0% /dev/shm
tmpfs 497M 13M 484M 3% /run
tmpfs 497M 0 497M 0% /sys/fs/cgroup
/dev/xvdf2 20G 866M 20G 5% /vol
*************
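At this point the expanded volume can be swapped back in as the root device of the original instance. A sketch of the remaining CLI calls, where <instance_id> is a placeholder for the original instance's ID (it is not captured above):
*************
# unmount the volume on the helper instance before detaching it
$ sudo umount /vol
$aws ec2 stop-instances --instance-ids <instance_id>
$aws ec2 wait instance-stopped --instance-ids <instance_id>
# detach the old 10 GB root volume and the expanded volume
$aws ec2 detach-volume --volume-id vol-292a653f
$aws ec2 detach-volume --volume-id vol-b8347bae
# attach the expanded volume as the root device and start the instance
$aws ec2 attach-volume --volume-id vol-b8347bae --instance-id <instance_id> --device /dev/sda1
$aws ec2 start-instances --instance-ids <instance_id>
*************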