• disk
  • harddrive
  • Linux
  • virtualization
  • vmware

Increase disk size of Ubuntu guest running in VMware

A while ago I created a virtual machine (VM) under VMware 5.1 with Ubuntu as the guest OS. I wasn't giving the task my full attention and made a couple of choices without thinking when setting it up. The problem I ended up with is that I only allocated about 10GB to the VM which, while certainly enough for the initial task, was nowhere near enough space once I started using the machine more heavily. It's easy enough in VMware to allocate more disk space to the VM, but you still need to configure the guest to use that extra space.
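For what it's worth, the virtual disk can also be grown from the command line instead of the UI. A sketch, with hypothetical disk paths (the VM should be powered off and snapshot-free):

vmware-vdiskmanager -x 30GB /path/to/foo.vmdk              # Workstation/Fusion
vmkfstools -X 30G /vmfs/volumes/datastore1/foo/foo.vmdk    # from an ESXi host shell

Either way, the guest just sees a bigger disk with the old layout still on it.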

If you're familiar with Windows, you might know that ever since Vista (2006) you can extend a live, mounted volume from the Disk Management GUI in Computer Management: right-click the volume and choose Extend Volume. This is only possible if the disk has been set up as (or converted to) a dynamic disk. "...basic partitions are confined to one disk and their size is fixed. Dynamic volumes allow to adjust size and to add more free space either from the same disk or another physical disk." Dynamic disks were introduced in Windows 2000, but they're not supported on portable computers; indeed, the option to convert a disk to a dynamic disk isn't even available on one.
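The same operation is scriptable with diskpart, if you prefer; a sketch, where the volume number is hypothetical:

C:\> diskpart
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend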

Turns out that Linux has had logical volume management for a bit longer (since 1998). I find it mildly interesting that I've never had a need to learn anything about it until now.

When I set up my Ubuntu guest I had chosen to use LVM, the Logical Volume Manager (in Ubuntu 12.04 LTS it's actually the lvm2 package). I remember seeing the option during setup, vaguely wondering if I should think about it a bit, maybe learn something... and, having my attention snapped elsewhere, deciding to just click OK and move on. To be honest, I haven't ever really had any need to care whether I used LVM or not, and in most cases where it was an option I'm sure I used it anyway. But I don't often need to go mucking around in my partition table, since I'm not running any large hard disk farms with hot-swappable drives on Linux (and certainly not at home), and by the time I need to think about increasing partition size I'm usually buying a new disk anyway. As I find myself running more VMs lately, I'm sure that will begin to change.
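For orientation: LVM stacks logical volumes (LVs) on top of volume groups (VGs), which pool one or more physical volumes (PVs). If you want to see how your own system is laid out before touching anything, the reporting commands are read-only:

nate@foo:~$ sudo pvs    # physical volumes and the VG each belongs to
nate@foo:~$ sudo vgs    # volume groups, with total and free space
nate@foo:~$ sudo lvs    # logical volumes carved out of each VG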

Since I really only wanted to extend a Logical Volume that already existed, in a Volume Group that already existed, I wasted a bunch of time trying to create new partitions and new volumes (following all the tutorials I read), which I ultimately had to learn how to destroy. All I really needed was to "extend" the root logical volume and then "resize" the filesystem on it to match its new extent. The following command extended the volume to match the allocation I'd set in VMware for the machine foo, while it was mounted and running:

nate@foo:~$ sudo lvextend -L30G /dev/mapper/foo-root
[sudo] password for nate:
 Extending logical volume root to 30.00 GiB
 Logical volume root successfully resized
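A caveat I should hedge on, since in my case the volume group already had free extents to give: if vgdisplay shows no free space, growing the virtual disk isn't enough by itself, and the new space has to be pushed up through the stack first, roughly like this (the partition device name is hypothetical, and the partition itself may need growing with parted or fdisk before this works):

nate@foo:~$ sudo vgdisplay foo        # check the "Free  PE / Size" line
nate@foo:~$ sudo pvresize /dev/sda5   # grow the PV to fill the enlarged partition

Also worth knowing: lvextend -l +100%FREE /dev/mapper/foo-root takes every remaining free extent, instead of naming an absolute size with -L.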

But when I tried to complete the next step, running resize2fs, I got an error saying I needed to unmount the disk first, even though LVM is supposed to work on a mounted volume. It turns out there is an online-resizing kernel patch you can try, but it's a use-at-your-own-risk thing, and we're talking about my data here! In my case the partition I wanted to resize had the OS on it, so to unmount it I powered the VM down and booted from a live Linux ISO, so the drive wouldn't be mounted and I could mess with it from the live ISO's terminal. The live ISO I had didn't include the LVM tools, so I had to install them first:

ubuntu@ubuntu:~$ sudo apt-get install lvm2

When I tried to run resize2fs this time I got the following advice:

ubuntu@ubuntu:~$ sudo resize2fs /dev/foo/root
resize2fs 1.42 (29-Nov-2011)
Please run 'e2fsck -f /dev/foo/root' first.
ubuntu@ubuntu:~$ sudo e2fsck -f /dev/foo/root
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/foo/root: 361990/1310720 files (0.9% non-contiguous), 2065702/5242880 blocks
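A side note, hedged because on my live ISO the device nodes appeared on their own: if /dev/foo/root is missing after installing lvm2, the volume group probably just isn't activated yet:

ubuntu@ubuntu:~$ sudo vgscan         # scan attached disks for volume groups
ubuntu@ubuntu:~$ sudo vgchange -ay   # activate them so /dev/foo/* shows up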

With the filesystem check done, I could then run resize2fs; with no size argument, it grows the filesystem to fill the whole logical volume:

ubuntu@ubuntu:~$ sudo resize2fs /dev/foo/root
resize2fs 1.42 (29-Nov-2011)
Resizing the filesystem on /dev/foo/root to 7864320 (4k) blocks.
The filesystem on /dev/foo/root is now 7864320 blocks long.
Rebooting, and checking:

nate@foo:~$ df -h
Filesystem             Size Used Avail Use% Mounted on
/dev/mapper/foo-root    30G 7.6G   21G  27% /

Now I've got the space I needed!