Sunday, October 25, 2020

Tribblix on Vultr with Block Storage

I wrote a while back about installing Tribblix on Vultr using iPXE.

That all still works perfectly with the m23.2 release. And it's pretty quick too. It's even quicker on the new "High Frequency" servers, which appear to use a 3.8GHz Skylake chip and NVMe storage.

One of the snags with many of the smaller cloud providers is that they don't necessarily have a huge choice of instance configurations. I'm not saying I want the billions of choices that AWS provide, but the instances are fairly one-dimensional - CPU, memory, and storage aren't adjustable independently. This is one of the reasons I use AWS: I can add storage without having to pay for extra CPU or memory.

One option here is to take a small instance, and add extra storage to it. That's what I do on AWS, having a small root volume and adding a large EBS volume to give me the space I need. This isn't - yet - all that commonly available on other providers.

Vultr do have it as an option, but at the moment it's only available in New York (NJ). Let's deploy an instance, and select NY/NJ as the location.

Scroll down and choose the "Upload ISO" tab, select the iPXE radio button, and put in the ipxe.txt script location for the version of Tribblix you want to install.
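
For the curious, what lives behind that URL is an ordinary iPXE script that fetches and boots an illumos kernel and boot archive over HTTP. Purely as a sketch of the shape - the hostname and paths below are invented, the real ones are whatever the Tribblix ipxe.txt for your chosen release contains - it looks something like:

#!ipxe
# illustrative URLs only - use the published Tribblix ipxe.txt, not this
dhcp
kernel http://pkgs.example.org/m23.2/platform/i86pc/kernel/amd64/unix
module http://pkgs.example.org/m23.2/platform/i86pc/boot_archive
boot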

I've chosen the 1GB instance size, which is ample. Deploy that, and then connect to the console.


View the console and watch the install. This is really quick if you deploy in London, and isn't too bad elsewhere in Europe, as the Tribblix server I'm loading from is in London. Transferring stuff across the Atlantic takes a bit longer.

Then run the install. This is just

./live_install.sh -G c2t0d0


It will download a number of packages to finish the install (these are normally loaded off the ISO if you boot that, but for an iPXE install it pulls them over the network).

Reboot the server and it will boot normally off disk.

Go to the Block Storage tab, and Add some block storage. I'm going to add 50GB just to play with.


Now we need to connect it to our server.

This isn't quite as obvious as I would have liked. Click on the little pencil icon and you get to the "Manage Block Storage" page. Select the instance you want to attach it to, and hit Attach. This will restart the server, so make sure you're not doing anything.

The documentation talks about /dev/vdb and the like, which isn't the way we name our devices. As we're using vioblk, this comes up as c3t0d0 (the initial boot drive is c2t0d0).
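
If you want to double-check what the new volume has been called, the easiest way is to list the disks with format and bail out without selecting one:

format < /dev/null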


We can create a pool

zpool create -O compression=lz4 store c3t0d0

This took a little longer than I expected to complete.

And I can do all the normal things you would expect with a ZFS pool.
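
For instance, creating a filesystem and checking on the pool work exactly as they would anywhere else (the dataset name here is just an example):

zfs create store/data
zfs list -r store
zpool status store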

If you go to Block Storage and click the pencil to Manage it, the size is clickable. I clicked that, and changed the size to 64GB.

As when resizing an EBS volume on AWS, there doesn't seem to be a way to persuade illumos to rescan the devices and spot that the size has changed. You have to reboot.
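
If you want to see what size illumos currently believes the disk to be, iostat should report it (here restricted to the block storage device):

iostat -En c3t0d0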

Except that a reboot doesn't appear to be enough. The Vultr console says "You will need to restart your server via the control panel before changes will be visible." and it appears to be correct on that.

(This is effectively power-cycling it, which is presumably necessary to propagate the change through the whole stack properly.)

After that, the additional space is visible, as you can see from the extra 14G in the EXPANDSZ column:
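
That column comes from zpool list, so the quick check here is:

zpool list store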


And you can expand the pool using 'online -e'

zpool online -e store c3t0d0

This caused me a little bit of trouble: it appeared to generate an I/O error, a flood of messages, and a hung console. I had to ssh in, clear the pool, and run a scrub before things looked sane. Expanding the pool then worked and things looked OK.
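
For reference, the recovery was just the standard zpool commands against the same pool, with zpool status to keep an eye on the scrub:

zpool clear store
zpool scrub store
zpool status store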

Generally, block device resize appears to be possible, but is still a bit rough round the edges.
