As a reminder, pv mode AMIs are deprecated, aren't supported by all instance types, and don't work in all regions. So you really need something that runs in hvm mode.
The first thought might be to convert the existing pv image to a hvm image. I've tried that and, while you can do the conversion, the resulting image doesn't actually work. The problem here is that ZFS embeds the physical paths of the devices it's installed on in the pool metadata. Changing from pv to hvm mode changes the emulated hardware, in particular the disk paths, so the ZFS pool isn't where it thought it was and the system panics at boot. You'll get the same panic any time there's a mismatch between the disk layout where the pool was created and the one you're actually running on.
If you had console access and could boot from media you could fix this, but AWS doesn't provide that. (And if you could boot from media you could just do a regular install without all the shenanigans involved in producing an AMI.)
So, you have to create the image on a system that looks like EC2. Which means using xen.
Fortunately, this road has been travelled before. These instructions are exactly what you need. They're for OpenIndiana, but will apply to any illumos distribution. And they're the process used by the OpenZFS project to do their testing. (I'll also mention that the OpenZFS folks have put a number of fixes back into illumos that improve the EC2 experience for us.)
I'm not going to repeat those instructions (that would be boring), so I'll talk about what I had to do or change to make them work for me.
I got one of my spare desktop PCs out and installed Ubuntu 16.04 on it. (I must be spoilt by Tribblix; the Ubuntu install was horrendously slow and very high maintenance.) I then installed xen, rebooted into the xen dom0, and set up the bridge networking.
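For the record, getting a xen dom0 on Ubuntu is little more than a package install and a reboot. Something like this should do it (package name as in the Ubuntu 16.04 repositories):

apt-get install xen-hypervisor-amd64
reboot

The Ubuntu packaging arranges for grub to boot the hypervisor ahead of the plain Linux kernel, so after the reboot you should come up as dom0.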
That was my first pothole. There's this thing called systemd that's come along, and it changes the way network configuration is done. Much cussing and googling, but I got it right first time.
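For reference, here's a minimal sketch of a bridge under systemd-networkd, which is one way of doing it; the physical interface name enp3s0 is an assumption, substitute whatever your box actually has:

# /etc/systemd/network/xenbr0.netdev
[NetDev]
Name=xenbr0
Kind=bridge

# /etc/systemd/network/xenbr0.network
[Match]
Name=xenbr0
[Network]
DHCP=yes

# /etc/systemd/network/enp3s0.network
[Match]
Name=enp3s0
[Network]
Bridge=xenbr0

Enable and restart systemd-networkd and the dom0 picks up its address on xenbr0, with guests attaching to the same bridge.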
Then I discovered that there's a new toolstack here. It's all xl rather than xm, but otherwise it seems much the same.
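If your fingers remember xm, the mapping is pretty much one to one. For example (the config file name is just for illustration):

xl list                      # list running domains, like xm list
xl create ami-template.cfg   # create and start a domain from a config file
xl console ami-template      # attach to the domain's console
xl destroy ami-template      # forcibly stop it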
I then tried to start a VM, only to be given a completely meaningless and unhelpful error message. Why tell the user what's wrong when you can just vomit a stack trace?
After a bit of head-scratching I worked out that the system didn't actually support hvm mode. If you run xl info and look for virt_caps, it should mention hvm; mine didn't. That was a bit odd, as the sticker on the front of the box suggested it should.
It appears manufacturers ship hardware with VT-x disabled in the BIOS. Into the BIOS we go, to find that the relevant settings are greyed out and you need a BIOS password to get into them. So, open the box and start looking for jumpers. Fortunately I found a helpful article - the key here was the bit about the jumper being blue; little details like that make all the difference.
OK, so having wiped the BIOS password, gone back into the BIOS and enabled VT-x, I went back to xen. Looking at virt_caps now showed hvm, as it should, and my domain started.
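The check itself is a one-liner; the exact wording of the output varies between xen releases, but on a capable system with VT-x enabled you should see hvm listed:

xl info | grep virt_caps
virt_caps              : hvm hvm_directio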
The idea here is that you connect to the console with VNC. Easy enough, but by the time I had got my ssh tunnel set up and started my VNC client, my VM had gone. I started it again and watched: it starts booting just fine, issues a few warnings, and then hits a kernel panic. It's all over pretty quickly.
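With vnc=1 in the domain definition (as in the config further down), the console for the first guest normally shows up on port 5900 of the dom0, so the tunnel is the usual sort of thing; hostnames here are placeholders:

ssh -L 5900:localhost:5900 user@xen-dom0
vncviewer localhost:5900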
In order to catch what it said, I turned to vnc2flv. Someone asked me about screen recorders a while back, and I suggested they do whatever it was in a vnc session and use vnc2flv to record it. It's the same trick here: once I had the session recorded I could watch the movie and pause it to see what errors it was spitting out.
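From memory the recorder in vnc2flv is a single script you point at the VNC display, which writes out a .flv file you can replay and pause at will; treat the exact invocation as an assumption and check the vnc2flv documentation:

flvrec.py localhost:0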
This, I think, is related to illumos bug 7186. It looks like we can't handle the network presented by newer versions of xen.
To get round this I simply disabled the network interface in the VM definition, as you'll see below. With that, the VM boots just fine and can be installed. You're a little bit limited in that you can't do updates, but as long as nwam is enabled the system will get itself onto the network when you later run it on something that does present a compatible network.
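For reference, the sort of vif line that's being left out would normally look something like this, with the bridge name matching whatever you set up on the dom0:

vif = [ 'bridge=xenbr0' ]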
For OmniOS, relying on nwam means you have to enable it by hand, as networking is switched off by default there. And remember that you must have networking enabled if you're running on EC2, as there's no other way to access your system.
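On OmniOS that's a couple of svcadm invocations inside the freshly installed image, using the standard illumos service names:

svcadm disable svc:/network/physical:default
svcadm enable svc:/network/physical:nwam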
What you'll also need to ensure at this point is that you have a functional user account you can get into via ssh. With Tribblix and OpenIndiana you have the jack account; other distros might need a user created. You wouldn't want that on a production AMI, of course, but you need to be able to log in to the system the first time in order to complete any configuration and add the various bits of AWS integration that you'll need.
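If your distribution doesn't give you one, something along these lines will do; the username here is purely illustrative, and you'd want to tidy the account up (or replace it with proper key-based provisioning) before calling the AMI production-ready:

useradd -m -d /export/home/ec2user -s /bin/bash ec2user
passwd ec2user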
Having got my image installed I followed the instructions through and got an AMI that works just fine.
The configuration file I used is:
builder='hvm'
name='ami-template'
vcpus=1
memory=1024
disk=[ 'file:/var/tmp/tribblix-0m20.1.iso,hdb:cdrom,r',
       'file:/root/ami-template.img,xvda,w' ]
boot='d'
vnc=1
vnclisten='0.0.0.0'
vncconsole=1
on_crash='preserve'
xen_platform_pci=1
serial='pty'
on_reboot='destroy'
The one crucial thing here, apart from not having a vif line to create a network interface, is that you must use xvda for the disk. That's what EC2 will present to you; if you use anything else you'll get the same panic on boot that I saw when attempting to convert a pv image.
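To actually run the install you point xl at the config and connect over VNC as before (the config file name is illustrative):

xl create /root/ami-template.cfg
xl vncviewer ami-template

Note that with on_reboot='destroy' the domain simply goes away when the installer reboots at the end, which is a handy signal that it's finished and the disk image is ready for the remaining AMI creation steps.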
We're almost done. Next time I'll talk about how to go from something that minimally boots up to something that's done well.