Sunday, October 25, 2020

Tribblix on Vultr with Block Storage

I wrote a while back about installing Tribblix on Vultr using iPXE.

That all still works perfectly with the m23.2 release. And it's pretty quick too. It's even quicker on the new "High Frequency" servers, which appear to use a 3.8GHz Skylake chip and NVMe storage.

One of the snags with many of the smaller cloud providers is that they don't necessarily have a huge choice of instance configurations. I'm not saying I want the billions of choices that AWS provide, but the instances are fairly one-dimensional - CPU, memory, and storage aren't adjustable independently. This is one of the reasons I use AWS: I can add storage without having to pay for extra CPU or memory.

One option here is to take a small instance, and add extra storage to it. That's what I do on AWS, having a small root volume and adding a large EBS volume to give me the space I need. This isn't - yet - all that commonly available on other providers.

Vultr do have it as an option, but at the moment it's only available in New York (NJ). Let's deploy an instance, and select NY/NJ as the location.

Scroll down and choose the "Upload ISO" tab, select the iPXE radio button, and put in the ipxe.txt script location for the version of Tribblix you want to install.
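For reference, the file you're pointing at is a standard iPXE script. The exact contents vary by release, but it's broadly of this shape - the paths below are illustrative placeholders, not the real locations - in effect booting the illumos kernel as a multiboot image with the boot archive as a module:

#!ipxe
# illustrative paths - use the script location for your chosen release
kernel http://pkgs.tribblix.org/m23.2/platform/i86pc/kernel/amd64/unix
module http://pkgs.tribblix.org/m23.2/platform/i86pc/boot_archive
boot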

I've chosen the 1GB instance size, which is ample. Deploy that, and then connect to the console.


View the console and watch the install. This is really quick if you deploy in London, and isn't too bad elsewhere in Europe, as the Tribblix server I'm loading from is in London. Transferring stuff across the Atlantic takes a bit longer.

Then run the install. This is just:

./live_install.sh -G c2t0d0


It will download a number of packages to finish the install (these are normally loaded off the ISO if you boot that, but for an iPXE install it pulls them over the network).

Reboot the server and it will boot normally off disk.

Go to the Block Storage tab, and Add some block storage. I'm going to add 50GB just to play with.


Now we need to connect it to our server.

This isn't quite as obvious as I would have liked. Click on the little pencil icon and you get to the "Manage Block Storage" page. Select the instance you want to attach it to, and hit Attach. This will restart the server, so make sure you're not doing anything.

The documentation talks about /dev/vdb and the like, which isn't the way we name our devices. As we're using vioblk, this comes up as c3t0d0 (the initial boot drive is c2t0d0).
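If you want to double-check, listing the disks shows both devices (illustrative output - the exact labels will vary):

format < /dev/null
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c2t0d0 <Virtio,Block Device ...>
       1. c3t0d0 <Virtio,Block Device ...>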


We can create a pool

zpool create -O compression=lz4 store c3t0d0

This took a little longer than I expected to complete.

And I can do all the normal things you would expect with a zfs pool.
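For example, carving out a filesystem and checking it (store/scratch is just an illustrative dataset name):

zfs create store/scratch
zfs list -r store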

If you go to Block Storage and click the pencil to Manage it, the size is clickable. I clicked that, and changed the size to 64GB.

As with resizing an EBS volume on AWS, there doesn't seem to be a way to persuade illumos to rescan the device and spot that its size has changed. You have to reboot.

Except that a reboot doesn't appear to be enough. The Vultr console says "You will need to restart your server via the control panel before changes will be visible.", and that appears to be correct.

(This is effectively power-cycling it, which is presumably necessary to propagate the change through the whole stack properly.)

After that, the additional space is visible, as you can see from the extra 14G in the EXPANDSZ column.

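A quick check (illustrative output - the numbers here are approximate, but consistent with a 50GB volume resized to 64GB):

zpool list -o name,size,expandsize store
NAME    SIZE  EXPANDSZ
store  49.5G       14G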

And you can expand the pool using 'online -e'

zpool online -e store c3t0d0

This caused me a little bit of trouble: it appeared to generate an I/O error, a flood of messages, and a hung console. I had to ssh in, clear the pool, and run a scrub before things looked sane. Expanding the pool then worked and things look OK.
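For the record, the recovery sequence was roughly this (using the pool and device names from above):

zpool clear store
zpool scrub store
zpool online -e store c3t0d0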

Generally, block device resize appears to be possible, but is still a bit rough round the edges.

Sunday, October 18, 2020

The state of Tribblix, 2020

It's been a funny year, has 2020.

But amongst all this, work on Tribblix continues.

I released milestone 22 back in March. That was a fairly long time in the making, as the previous release was 9 months earlier. Part of the explanation for the lengthy gap was that there wasn't all that much reason for a new release - there were a lot of updated packages, but no big items. I guess the biggest thing is that the default gcc compiler and runtime went from gcc4 to gcc7. (In places, the gcc4 name continues.)

Milestone 23 was the next full release, in July. Things start to move again here - Tribblix fully transitioned from gcc4 to gcc7, as illumos is now a gcc7 build. I updated the MATE desktop, which was the start of moving from gtk2 to gtk3. There's a prettier boot banner, which allows a bit of custom branding.

There's a long-running effort to migrate from Python 2.x to 3.x. This is slow going - there are actually quite a lot of python modules and tools (and things that use python) that still show no sign of engaging with the Python 3 shift. But I'm gradually making sure that everything that can be version 3 is, and removing the python 2 pieces where possible. This is getting a bit more complicated - as of Python 3.8 I've switched from 32-bit to 64-bit. And now that they're doing time-based releases, there will be a version bump to navigate every year, just to add to the work.

Most of the Tribblix releases have been full upgrades from one version to the next. With the milestone 20 series, I introduced update releases, which provided a shared stream of userland packages while allowing illumos updates to take place. The same model is true of milestone 23 - update 1 came along in September.

With Milestone 23 update 1 we fixed the bhyve CVE. Beyond the normal updates, I added XView, which suits the retro theme and which quite a few people have asked for.

Immediately after that (it was supposed to be in 23.1 but wasn't quite ready) came another major update: refreshing the X server stack.

When Tribblix was created, I didn't have the resources to build everything from scratch straight away, so "borrowed" a few components from OpenIndiana (initially 151a8, then 151a9) just to make sure I had enough bits to provide a complete OS. Many of those borrowed components were replaced fairly quickly over time, but the X11 stack was the big holdout. It was finally time to build Xorg and the drivers myself. It wasn't too difficult, but to be honest I have no real way to test most of it. So that will all be present in 23.2.

One reason for doing this - and my hand was forced a little here - is that I've also updated Xfce from 4.12 to 4.14. That's also a gtk2 to gtk3 switch, but Xfce 4.14 simply didn't work on the old Xorg I had before.

Something else I've put together - and these are all gtk3 based - is a lightweight desktop, using abiword, geany, gnumeric, grisbi, imagination, and netsurf. You still need a file manager to round out the set, and I really haven't found anything that's lightweight and builds successfully, so at the moment this is really an adjunct to MATE or Xfce.

Alongside all this I've been working on keeping Java (OpenJDK) working on illumos. Upstream ripped out Solaris support early in the year, but I've been able to put it back. The real killer here was the Studio support, and we don't want that anyway (it's not open source, and the binaries no longer run). There are other unix-like variants supported by Java, running on the x86 architecture with a gcc toolchain, just like us, so it shouldn't be that much of a mountain to climb.

Support for SPARC is currently slightly on the back burner, partly because the big changes mentioned above aren't really relevant for SPARC, partly due to having less time, and partly due to the weather - running SPARC boxes in the home office tends to be more of a winter than a summer pursuit, because of the heat.