Sunday, March 15, 2026

Observations on Tribblix m39

 I've just announced the latest release of Tribblix - m39, now available for download and upgrades.

This follows the "when I feel like it" release model, which means that (a) I have some potentially breaking changes that need a release boundary to navigate, and (b) enough time has passed that I feel the need to update illumos to pick up any recent changes. There's no hard timescale, but for regular releases I would have thought 3-6 months would be about the right ballpark.

What's new this time around? There's the usual dry list of updates, which could be much longer (anything updated multiple times only gets listed the once, and all the python and perl module updates are missing entirely).

Let's pick some of those updates apart.

The libtiff update was a large one. Not because the update itself was difficult, but because the shared library SONAME got updated. For the time being (and for an indeterminate length of time) I'll ship both the old and new shared libraries, but I've rebuilt (almost) everything against the new version. That's one reason for this being done on an upgrade boundary - to force the breaking change and all the updates associated with it to take place at once.
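The reason an SONAME bump forces all this rebuilding is that the SONAME is recorded both in the library itself and as a NEEDED entry in everything linked against it. On illumos you can inspect both with elfdump; the paths below are illustrative, not necessarily where Tribblix installs these files:

```shell
# Show the SONAME recorded in the library (path is illustrative)
elfdump -d /usr/lib/libtiff.so | grep SONAME

# Show the NEEDED entries of a consumer; each must match the
# SONAME of an installed library, old or new
elfdump -d /usr/bin/tiffinfo | grep NEEDED
```

Shipping both the old and new shared libraries means binaries carrying either NEEDED entry keep working during the transition.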

There's a similar, but much smaller, story associated with OpenEXR.

For OpenSSL, there are a couple of changes. The first is that it's bumped from the 3.0.x series to the 3.5.x series. It's all binary compatible, so doesn't need the world to be rebuilt, but again the reason for pushing it on a release boundary is to ensure that nothing subsequently built against the new version is installed on a system with the old version. The second change is that the surfaced API is now 3.0, rather than the prior 1.1.1.

It wasn't strictly necessary to have an OpenSSH update tied to a release, but that got rolled in. My very first test triggered the post-quantum warning, because one of my build servers is deliberately running a much older version of Tribblix with a much older SSH server.

The underlying illumos gate build also has additional patches for NFSv4.1 - specifically the backchannel fixes in 16390. Hopefully this will make it into regular illumos soon, but it seemed like an excellent feature to get baked in.

I also patch illumos, as I did last time, for a larger range of pids, remove the y2038 clamp in ZFS, and tune the network stack for the 21st century.

There's one Tribblix feature that would be worth talking about in this release - appstack zones, which allow you to build a zone running an application, and do basic configuration of it, all with a simple one-line command. I'll talk about that separately, as it's almost but not quite ready for wider adoption.

Sunday, February 15, 2026

Enabling IPv6 on EC2

I've actually been using IPv6 on and off since the late 1990s - there was an addon for Solaris 2.6 that we installed on a bunch of test machines. It worked great, but wasn't something we ever did properly in production because at the time none of our customers had IPv6.

I've wanted to use IPv6 more, but my home ISP has never offered it. That changed recently, when I noticed my main machine (running Tribblix, naturally) had an IPv6 address and was using it. At that point, investigating and testing IPv6 became a lot more interesting.

Some of the Tribblix web servers (specifically the pkgs and iso download servers) are hosted in the cloud. I do this for testing and dogfooding, so I know that Tribblix works on those cloud environments.

The iso machine is on Digital Ocean; sadly Digital Ocean don't support IPv6 for custom images, for no good reason that I can see. The pkgs server, though, is on AWS, which does support IPv6 - it just wasn't enabled.

My first test here was actually to see if I could launch an EC2 instance with an IPv6 address, using the aws cli:

aws ec2 run-instances --ipv6-address-count 1 ...

and this failed, because IPv6 wasn't enabled where I was trying to launch it. So I needed to do a few steps to make it work, and this is largely to remind me for when I need to do it again.

First, you need to associate an IPv6 block of addresses with the VPC you're using. Go to the VPC in the console, find something that says Edit CIDRs and, in that, Add a new IPv6 CIDR. Just choose an Amazon provided one and you're good. That should give you a /56 block.
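If you'd rather script this than click through the console, the same step should correspond to something like the following with the aws cli (the VPC id is a placeholder - substitute your own):

```shell
# Associate an Amazon-provided IPv6 CIDR block (a /56) with the VPC.
# vpc-0abc123 is a placeholder VPC id.
aws ec2 associate-vpc-cidr-block \
    --vpc-id vpc-0abc123 \
    --amazon-provided-ipv6-cidr-block
```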

You then need to go into each of the subnets in that VPC and associate an IPv6 CIDR block with it. Find the Edit IPv6 CIDRs button, go into that, and Add IPv6 CIDR. You can add the entire block to one subnet if you want, but normally you'd break it down. Under the allocation are some arrows: the up and down arrows change the size of the block - I just went down one, to a /60. By default the starting address of the subnet block is the same as the VPC block, so for additional subnets you'll need to use the little right arrow to change the start address, as you can't associate overlapping blocks.

I also went into the subnet settings to Enable auto-assign IPv6 address.
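The two subnet-level steps also have cli equivalents; the subnet id and CIDR below are placeholders, with the CIDR being a carve-out from whatever block Amazon assigned to your VPC:

```shell
# Associate a slice of the VPC's IPv6 block with one subnet.
# subnet-0def456 and the CIDR are placeholders.
aws ec2 associate-subnet-cidr-block \
    --subnet-id subnet-0def456 \
    --ipv6-cidr-block 2600:1f18:1234:5600::/60

# Have new instances in this subnet pick up an IPv6 address automatically.
aws ec2 modify-subnet-attribute \
    --subnet-id subnet-0def456 \
    --assign-ipv6-address-on-creation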

What you then need to do is go to the route table (either from the subnet or the VPC) and add a route for ::/0 with the target being the internet gateway - there should only be the one gateway, the same as used for IPv4. If you don't add the route to the route table you'll get IPv6 addresses but you won't be able to talk to anything outside the VPC over IPv6.
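Adding the route from the cli looks something like this (route table and gateway ids are placeholders; the gateway is the same internet gateway the VPC already uses for IPv4):

```shell
# Route all IPv6 traffic out through the existing internet gateway.
# rtb-0123abc and igw-0456def are placeholder ids.
aws ec2 create-route \
    --route-table-id rtb-0123abc \
    --destination-ipv6-cidr-block ::/0 \
    --gateway-id igw-0456def
```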

With that, launch a new instance and you'll get IPv6 working nicely.

Nothing to do with EC2, but the one other thing I needed to do was add an additional IPv6 listen directive to each server in my nginx config, as nginx will only listen on IPv4 by default.
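For reference, a dual-stack server block looks something like this, assuming a plain port-80 listener (the server_name is a placeholder):

```nginx
server {
    listen 80;          # IPv4
    listen [::]:80;     # the additional IPv6 listener
    server_name pkgs.example.com;
}
```

Without the second directive nginx binds only to the IPv4 wildcard, so IPv6 clients get connection refused even though the instance itself is reachable over v6.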