Thursday, June 11, 2015

Badly targeted advertising

The web today is essentially one big advertising stream. Everywhere you go you're bombarded by adverts.

OK, I get that it's necessary. Sites do cost money to run, and the people who work on them have to get paid. It might be evil, but (in the absence of an alternative funding model) it's a necessary evil.

There's a range of implementations. Some subtle, others less so. Personally, I take note of the unsubtle and brash ones, the sort that actively interfere with what I'm trying to achieve, and mark them as companies I'm less likely to do business with. The more subtle ones I tolerate as the price for using the modern web.

What is abundantly clear, though, is how much tracking of your activities goes on. For example, I needed to do some research on email suppliers yesterday - and am being bombarded with adverts for email services today. If I go away, I get bombarded with adverts for hotels at my destination. Near Christmas I get all sorts of advertising popping up based on the presents I've just purchased.

The thing is, though, that most of these adverts are wrong and pointless. The idea that searching for something, or visiting a website on a certain subject, indicates that I'll be interested in the same things in future is simply wrong.

Essentially, if I'm doing something on the web, then I have either (a) succeeded in the task at hand (bought an item, booked a hotel), or (b) failed completely. In either case, basing subsequent advertising on past activities is counterproductive.

If I've booked a hotel, then the last thing I'm going to do next is book another hotel for the same dates at the same location. More sensible behaviour for advertisers would be to prime the system to stop advertising hotels, and then advertise activities and events (for which they even know the dates) at my destination. It's likely to be more useful for me, and more likely to get a successful response for the advertiser. Likewise, once I've bought an item, stop advertising that and instead move on to advertising accessories.

And if I've failed in my objectives, ramming more of the same down my throat is going to frustrate me and remind me of the failure.

In fact, I wonder if a better targeting strategy would be to turn things around completely, and advertise random items excluding the currently targeted items. That opens up the possibility of serendipity - triggering a response that I wasn't even aware of, rather than trying to persuade me to do something I already actively wanted to do.

Sunday, June 07, 2015

Building LibreOffice on Tribblix

Having decent tools is necessary for an operating system to be useful, and one of the basics for desktop use is an office suite - LibreOffice being the primary candidate.

Unfortunately, there aren't prebuilt binaries for any of the Solaris or illumos distros. So I've been trying to build LibreOffice from source for a while. Finally, I have a working build on Tribblix.

This is what I did. Hopefully it will be useful to other distros. This is just a straight dump of my notes.

First, you'll need java (the jdk), and the perl Archive::Zip module. You'll need boost, and harfbuzz with the icu extensions. Plus curl, hunspell, cairo, poppler, neon.

Then you'll need to build (look on this page for links to some of this stuff):

  • cppunit-1.13.2
  • librevenge-0.0.2
  • libwpd-0.10.0
  • libwpg-0.3.0
  • libmspub-0.1.2
  • libwps-0.3.1
  • mdds_0.11.2
  • libixion-0.7.0
  • liborcus-0.7.0
  • libvisio-0.1.1

If you don't tell it otherwise, LibreOffice will download these and try to build them itself. And generally these have problems building cleanly, which are fairly easy to fix while building them in isolation, but would be well nigh impossible to fix when they're buried deep inside the LibreOffice build system.

For librevenge, pass --disable-werror to configure.

For libmspub, replace the call to pow() in src/lib/MSPUBMetaData.cpp with std::pow().

For libmspub, remove zlib from the installed pc file (Tribblix, and some of the other illumos distros, don't supply a pkgconfig file for zlib).

For liborcus, run the following against all the Makefiles that the configure step generates:


For mdds, make sure you have a PATH with the GNU install program ahead of the system install program when running make install.

For ixion, it's a bit more involved. You need some way of getting -pthreads past both configure *and* make. For configure, I used:

env boost_cv_pthread_flag=-pthreads CFLAGS="-O -pthreads" CPPFLAGS="-pthreads" CXXFLAGS="-pthreads" configure ...

and for make:

gmake MDDS_CFLAGS=-pthreads

For liborcus, configure looks to pkgconfig to find zlib, so you'll need to override that:

env ZLIB_CFLAGS="-I/usr/include" ZLIB_LIBS="-lz" configure ...

For libvisio, replace the call to pow() in src/lib/VSDMetaData.cpp with std::pow().

For libvisio, remove zlib and libxml-2.0 from the installed pc file.

If you want to run a parallel make, don't use gmake 3.81. Version 4.1 is fine.

With all those installed you can move on to LibreOffice.

Unpack the main tarball.

chmod a+x bin/unpack-sources
mkdir -p external/tarballs

and then symlink or copy the other tarballs (help, translations, dictionaries) into external/tarballs (otherwise, it'll try downloading them again).
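That symlinking step might look like the following sketch. The download directory is a placeholder, not a real default; point it at wherever you saved the help, translations, and dictionaries tarballs.

```shell
# Link the auxiliary tarballs into place so the build doesn't try to
# download them again. TARBALLS is a placeholder path.
TARBALLS=/path/to/downloads
mkdir -p external/tarballs
for tarball in "$TARBALLS"/*.tar.xz; do
    if [ -e "$tarball" ]; then
        ln -s "$tarball" external/tarballs/
    fi
done
```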

Download and run this script to patch the builtin version of glew.

Edit the following files:

  • svx/
  • sw/
  • vcl/
  • desktop/
  • vcl/

And replace "LINUX" with "SOLARIS". That part of the makefiles is needed on all unix-like systems, not just Linux.
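One way to script that edit is a sketch like this; since the file paths above are truncated in this copy, pass the real makefile paths as arguments. The function name is just something I've made up here.

```shell
# linux_to_solaris: rewrite LINUX to SOLARIS in each makefile given.
# Written portably rather than relying on GNU sed's -i option.
linux_to_solaris() {
    for f in "$@"; do
        sed 's/LINUX/SOLARIS/g' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    done
}
```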

In the file


replace the call to pow() on line 3160 with std::pow()

In the file


replace the call to pow() on line 87 with std::pow()

In the file


You'll need to #undef TRANSPARENT before it's used (otherwise, it picks up a rogue definition from the system).

And you'll need to create a compilation symlink:

mkdir -p instdir/program
ln -s instdir/program/

This is the configure command I used:

env PATH=/usr/gnu/bin:$PATH \
./configure --prefix=/usr/versions/libreoffice-44 \
--with-system-hunspell \
--with-system-curl \
--with-system-libpng \
--with-system-clucene=no \
--with-system-libxml \
--with-system-jpeg=no \
--with-system-cairo \
--with-system-harfbuzz \
--with-gnu-cp=/usr/gnu/bin/cp \
--with-gnu-patch=/usr/gnu/bin/patch \
--disable-gconf \
--without-doxygen \
--with-system-openssl \
--with-system-nss \
--disable-python \
--with-system-expat \
--with-system-zlib \
--with-system-poppler \
--disable-postgresql-sdbc \
--with-system-icu \
--with-system-neon \
--disable-odk \
--disable-firebird-sdbc \
--without-junit \
--disable-gio \
--with-jdk-home=/usr/jdk/latest \
--disable-gltf \
--with-system-libwps \
--with-system-libwpg \
--with-system-libwpd \
--with-system-libmspub \
--with-system-librevenge \
--with-system-orcus \
--with-system-mdds \
--with-system-libvisio \
--with-help \
--with-vendor="Tribblix" \
--enable-release-build=yes

and then to make:

env LD_LIBRARY_PATH=/usr/lib/mps:`pwd`/instdir/ure/lib:`pwd`/instdir/sdk/lib:`pwd`/instdir/program \
PATH=/usr/gnu/bin:$PATH \
/usr/gnu/bin/make -k build

(Using 'make build' is supposed to avoid the checks, many of which fail. You'll definitely need to run 'make -k' with a parallel build, because otherwise some of the test failures will stop the build before all the other parallel parts of the build have finished.)

Then create symlinks for all the .so files in /usr/lib/mps in instdir/program, and instdir/program/soffice should start.
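A sketch of that symlinking step; mklinks is just an illustrative helper, not an existing tool.

```shell
# mklinks: symlink every shared object from one directory into another.
mklinks() {
    # $1 = source directory, $2 = destination directory
    for lib in "$1"/*.so*; do
        if [ -e "$lib" ]; then
            ln -s "$lib" "$2"/
        fi
    done
}

mklinks /usr/lib/mps instdir/program
```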

Sunday, May 31, 2015

What sort of DevOps are you?

What sort of DevOps are you? Can you even define DevOps?

Nobody really knows what DevOps is; there are almost as many definitions as practitioners. Part of the problem is that the name tends to get tacked onto anything to make it seem trendy. (The same way that "cloud" has been abused.)

Whilst stereotypical, I tend to separate the field into the puritans and the pragmatists.

The puritanical vision of DevOps is summarized by the mantra of "Infrastructure as Code". In this world, it's all about tooling (often, although not exclusively, based around configuration management).

From the pragmatist viewpoint, it's rather about driving organizational and cultural change to enable people to work together to benefit the business, instead of competing with each other to benefit their own department or themselves. This is largely a reaction to legacy departmental silos that simply toss tasks over the wall to each other.

I'm firmly in the pragmatist camp. Tooling helps, but you can use all the tools in the world badly if you don't have the correct philosophy and culture.

I see a lot of emphasis being placed on tooling. Partly this is because in the vendor space, tooling is all there is - vendors frame the discussion in terms of how tooling (in particular, their tool) can improve your business. I don't have a problem with vendors doing this - they have to sell stuff, after all - but I regard conflating their offerings with DevOps in the large, or even defining DevOps as a discipline, as misleading at best.

Another worrying trend (I'm seeing an awful lot of this from recruiters, not necessarily practitioners) is the stereotypical notion that DevOps is still about getting rid of legacy operations and having developers carry the pager. This again starts out in terms of a conflict between Dev and Ops and, rather than resolving it by combining forces, simply throws one half of the team away.

Where I do see a real problem is that smaller organizations might start out with only developers, and then struggle to adopt operational practices. Those of us with a background in operations need to find a way to integrate with development-led teams and organizations. (The same problem arises when a subversive development team in a large business goes round the back of traditional operations, and eventually finds that it needs operational support.)

I was encouraged that the recent DOXLON meetup had a couple of really interesting talks about culture. Practitioners know that this is important; we really need to get the word out.

Where have all the SSDs gone?

My current and previous laptop - that's a 3-year timespan - both had an internal SSD rather than rotating rust. The difference between those and prior systems was like night and day - instant-on, rather than the prior experience of making a cup of coffee while waiting for the old HDD system to stagger into life.

My current primary desktop system is also SSD based. Power button to fully booted is a small number of seconds. Applications are essentially instant - certainly compared to startup times for things like firefox that used to be double-digit seconds before it was ready to go.

(This startup speed changes usage patterns. Who really needs suspend/resume when the system boots in the time it takes to settle comfortably in your chair?)

So I was a little surprised, while browsing in a major high street electronics retailer, to find hardly any evidence of SSDs. Every desktop system had an HDD. Almost all the laptops were HDD based. A couple of the all-in-ones had hybrid drives. SSDs were conspicuous by their absence.

I had actually noticed this trend while looking online. I've just checked the desktops on the Dell site, and there's no sign of a system with an SSD option.

Curious, I asked the shop assistant, who replied that SSDs were far too expensive.

I'm not sure I buy the cost argument. An SSD actually costs the same as an HDD - at least, the range of prices is exactly the same. So the prices will stay unchanged, but obviously the capacity will be quite a bit less. And it looks like the sales pitch is about capacity.

But even there, the capacity numbers are meaningless. It's purely bragging rights, disconnected from reality. With any of the HDD options, you're looking at hundreds of thousands of songs or pictures. Very few typical users will need anything like that much - and if you do, you're going to need to look at redundancy or backup. And with media streaming and cloud-based backup, local storage is more a liability than an asset.

So, why such limited penetration of SSDs into the home computing market?

Thursday, February 12, 2015

How illumos sets the keyboard type

It was recently pointed out that, while the Tribblix live image prompts you for the keyboard type, the installer doesn't carry that choice through to the installed system.

Which is right. I hadn't written any code to do that, and hadn't even thought of it. (And as I personally use a US unix keyboard then the default is just fine for me, so hadn't noticed the omission.)

So I set out to discover how to fix this. And it's a twisty little maze.

The prompt when the live image boots comes from the kbd command, called as 'kbd -s'. It does the prompt and sets the keyboard type - there's nothing external involved.

So to save that, we have to query the system. To do this, run kbd with the -t and -l arguments:

# kbd -t
USB keyboard

# kbd -l
type=6
layout=33 (0x21)

OK, in the -l output type=6 means a USB keyboard, so that matches up. These types are defined in <sys/kbd.h>:

#define KB_KLUNK        0x00            /* Micro Switch 103SD32-2 */
#define KB_VT100        0x01            /* Keytronics VT100 compatible */
#define KB_SUN2         0x02            /* Sun-2 custom keyboard */
#define KB_VT220        0x81            /* Emulation VT220 */
#define KB_VT220I       0x82            /* International VT220 Emulation */
#define KB_SUN3         3               /* Type 3 Sun keyboard */
#define KB_SUN4         4               /* Type 4 Sun keyboard */
#define KB_USB          6               /* USB keyboard */
#define KB_PC           101             /* Type 101 AT keyboard */
#define KB_ASCII        0x0F            /* Ascii terminal masquerading as kbd */

That handles the type, and basically everything today is a type 6.

Next, how is the keyboard layout matched? That's the 33 in the output. The layouts are listed in the file

/usr/share/lib/keytables/type_6/kbd_layouts

which is a key-value map of name to number. So what we have here is:

US-English=33
And if you check the source for the kbd command, 33 is the default.

Note that the numbers that kbd -s generates to prompt the user with have absolutely nothing to do with the actual type - the prompt just makes up an incrementing sequence of numbers.

So, how is this then loaded into a new system? Well, that's the keymap service, which has a method script that then calls


(yes, it's a twisty maze). That script gets the layout by calling eeprom like so:

/usr/sbin/eeprom keyboard-layout

Now, usually you'll see:

keyboard-layout: data not available.

which is fair enough, I haven't set it.

On x86, eeprom is emulated, using the file

/boot/solaris/bootenv.rc

So, to copy the keyboard layout from the live instance to the newly installed system, I need to:

1. Get the layout from kbd -l

2. Parse /usr/share/lib/keytables/type_6/kbd_layouts to get the name that corresponds to that number.

3. Poke that back into eeprom by inserting an entry into bootenv.rc

Oof. This is arcane.
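Step 2, mapping the number back to a name, is just a lookup in that key=value file. A sketch (layout_name is a helper I've invented for illustration):

```shell
# layout_name: given a layout number, print the matching name from a
# kbd_layouts-style file of name=number lines (comments start with #).
layout_name() {
    # $1 = layout number, $2 = path to the kbd_layouts file
    awk -F= -v n="$1" '/^[^#]/ && $2 == n { print $1; exit }' "$2"
}

# e.g. layout_name "`kbd -l | awk -F= '/^layout/ {print $2+0}'`" \
#          /usr/share/lib/keytables/type_6/kbd_layouts
```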

Sunday, February 08, 2015

Tribblix scorecard

I was just tidying up some of the documentation and scripts I used to create Tribblix, including pushing many of the components up to repositories on github.  One of the files I found was a quick sketch of my initial aims for the distro. I'm going to list those below, with a commentary as to how well I've done.
It must be possible to install a zone without requiring external resources
Success. On Tribblix, you don't need a repo to install a whole or sparse root zone. You can also install a partial-root zone that has a subset of the global zone's packages. (Clearly, adding software to a zone that isn't installed in the global zone will require an external source of packages, although it's possible to pre-cache them in the global zone.)
It must be possible to minimize a system, removing all extraneous software including perl, python, java.
Almost successful. There's no need in Tribblix for any of perl, python, or java. There are still pieces of illumos that can drag in perl in particular, but there is work to eliminate those as well. (One corollary to this aim is that you can freely and arbitrarily replace any of perl, python, or java by completely different versions without constraint.)
It should be possible to upgrade a system from the live CD
In theory, this could be made to work trivially. The live CD contains both a minimalist installed image and additional packages. During installation, the minimalist image is simply copied to the destination, and additional packages added separately. As a space optimization, I don't put the packages in the minimalist image on the iso, as they would never be used during normal installation.
It should be possible to use any filesystem of the user's choice (although zfs is preferred)
Success. Although the default file system at install is zfs, the live CD comes with a ufs install script (which does work, although it doesn't get much testing) which should be extensible to other file systems. In addition, I've built systems running with an nfs root file system.
It must be possible to select which versions of utilities are to be installed; and to have multiple versions installed simultaneously. It should be possible to define one of those installed versions as the default.
Partially successful. The way this is implemented is that certain utilities are installed under /usr/versions, and it's possible to have different versions co-exist. I've tried various levels of granularity, so it's a work in progress. For example, OpenJDK has a different package for each update (so you can have 7u71 and 7u75 installed together), whereas for python I just have 2.7 (for all values of 2.7.x) and 3.4. There are symlinks in the regular locations so they're in the regular search path, which can be modified to refer to a different version if the administrator so desires, but there isn't a built-in mechanism such as mediators - by default, the most recently installed version wins.
It must be possible to install by functionality rather than requiring users to understand packages. (Probably implemented as package groups or clusters.)
Success. I define overlays of useful functionality to hide packages, and the zap utility, the installer, and the zone tools are largely based around overlays as the fundamental unit of installation.
It should be possible to use small system configurations. Requiring over 1G of memory just to boot isn't acceptable.
Success. Tribblix will install and run in 512M (with limitations - making heavy use of java or firefox will be painful). I have booted the installer in slightly less, and run in 256M (it's pretty easy to try this in an emulator such as VirtualBox), but the way the installer works, by loading a full image archive into memory, will limit truly small configurations, as the root archive itself is almost 200M.
It should be possible to customize what's installed from the live CD (to a limited extent, not arbitrarily)
Success. You can define the installed software profile by choosing which overlays should be installed.

Overall, all of those initial aims have been met, or could easily be met by making trivial adjustments. I think that's a pretty good scorecard overall.

In addition to the general aims for Tribblix, I wrote down a list of milestones against which to measure progress. The milestones were more about implementation details than general aims (things like "migrate from gcc3 to gcc4", "build illumos from scratch", "become self-hosting", "create an upgrade mechanism", "make a sparc version", or "have LibreOffice working"). That's where the "milestone" nomenclature in the Tribblix releases comes from, although I never specified in which order I would attack the milestones; each one just makes for a convenient "yes, I got that working" point at which I might put out a new iso for download.

In terms of progress against those milestones, about the only one left to do that's structural is the upgrade capability. It's almost there, but needs more polish. Much of the rest is adding applications. So it's at this point that I can really start to think about producing something that I can call 1.0.

Tuesday, December 23, 2014

Setting up Logical Domains, part 2

In part 1, I talked about the server side of Logical Domains. This time, I'll cover how I set up a guest domain.

First, I have created a zfs pool called storage on the host, and I'm going to present a zvol (or maybe several) to the guests.

I'm going to create a little domain called ldom1.

ldm add-domain ldom1
ldm set-core 1 ldom1
ldm add-memory 4G ldom1
ldm add-vnet vnet1 primary-vsw0 ldom1

Now create and add a virtual disk. Create a dataset for the ldom and a 24GB volume inside it, add it to the storage service and set it as the boot device.

zfs create storage/ldom1
zfs create -V 24gb storage/ldom1/disk0
ldm add-vdsdev /dev/zvol/dsk/storage/ldom1/disk0 ldom1_disk0@primary-vds0
ldm add-vdisk disk0 ldom1_disk0@primary-vds0 ldom1
ldm set-var auto-boot\?=true ldom1
ldm set-var boot-device=disk0 ldom1

Then bind resources, list, and start:

ldm bind-domain ldom1
ldm list-domain ldom1
ldm start-domain ldom1

You can connect to the console by looking at the number under CONS in the list-domain output:

# ldm list-domain ldom1
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldom1            bound      ------  5000    8     4G
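If you'd rather not eyeball the columns, ldm also has parseable output (ldm list -p), which emits pipe-separated key=value fields. A sketch of extracting the console port that way; the exact field layout in the example is an assumption:

```shell
# cons_port: pull the cons= field out of `ldm list -p` style output,
# which consists of pipe-separated key=value fields.
cons_port() {
    awk -F'|' '{ for (i = 1; i <= NF; i++)
                     if ($i ~ /^cons=/) { sub(/^cons=/, "", $i); print $i } }'
}

# e.g. ldm list -p ldom1 | cons_port
```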

# telnet localhost 5000
Connected to localhost.
Escape character is '^]'.

Connecting to console "ldom1" in group "ldom1" ....
Press ~? for control options ..

T5140, No Keyboard
Copyright (c) 1998, 2014, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.6.e, 4096 MB memory available, Serial #83586105.
Ethernet address 0:14:4f:fb:6c:39, Host ID: 84fb6c39.

Boot device: disk0  File and args:
Bad magic number in disk label
ERROR: /virtual-devices@100/channel-devices@200/disk@0: Can't open disk label package

ERROR: boot-read fail


Can't open boot device

{0} ok

If you attempt to boot it off the net, it says:

Requesting Internet Address for 0:14:4f:f9:87:f9

which doesn't match the MAC address in the console output. So if you want to jumpstart the box - and that's easy enough - you need to do a 'boot net' first to get the actual MAC address of the LDOM, so you can add it to your jumpstart server.

To add an iso image to boot from:

ldm add-vdsdev options=ro /var/tmp/img.iso iso@primary-vds0
ldm add-vdisk iso iso@primary-vds0 ldom1

At the ok prompt, you can issue the 'show-disks' command to see what disk devices are present. To boot off the iso:

boot /virtual-devices@100/channel-devices@200/disk@1:f

And it should work. This is how I've been testing the Tribblix images for SPARC, by the way.

Setting up Logical Domains, part 1

I've recently been playing with Logical Domains (aka LDOMs, aka Oracle VM Server for SPARC). For those unfamiliar with the technology, it's a virtualization framework built into the hardware of pretty well all current SPARC systems, more akin to VMware than Solaris zones.

For more information, see here, here, or here.

First, why use it? Especially when Solaris has zones. The answer is that it addresses a different set of problems. Individual LDOMs are more independent and much more isolated than zones. You can partition resources more cleanly, and different LDOMs don't have to be at the same patch level (to my mind, it's not that you can have a different level of patches in each LDOM that matters, but that you can do maintenance of each LDOM to different schedules that matters). One key advantage I find is that the virtual switch you set up with LDOMs is much better at dealing with complex network configuration (I have hosts scattered across maybe dozens of VLANs, trying to fake that up on Solaris 10 is a bit of a bind). And some applications don't really get on with zones - I would build new systems around zones, but ill-understood and poorly documented legacy systems might be easier to drop inside an LDOM.

That dealt with, here's how I set up one of my machines (a T5140, as practice for live deployment on a bunch of replacement T4-1 systems) as an LDOM host. I'll cover setting up the guest side in a second post.

Make sure the firmware is current - these are the minimum revs:

T2 - 7.4.5
T3 - 8.3
T4 - 8.4.2c

Then install the LDOM software.

cd /var/tmp
cd OVM_Server_SPARC-3_1/Install
./install-ldm

You'll get asked if you want to launch the configuration assistant after installation. I chose n, and you can run ldmconfig at any later time. (If you want - it's best not to.)

Now we need to apply the LDOM patch:

svcadm disable -s ldmd
patchadd 150817-02
svcadm enable ldmd

Verify things are working as expected:

# ldm list
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-c--  SP      128   32544M   0.1%  16m

You should see the primary domain.

The next step is to establish default services and configure the control domain.

The necessary services are:

  • Virtual console
  • Virtual disk server
  • Virtual switch service

ldm add-vds primary-vds0 primary
ldm add-vcc port-range=5000-5100 primary-vcc0 primary
ldm add-vsw net-dev=nxge0 primary-vsw0 primary

Verify with:

ldm list-services primary

And we need to limit the control domain to a limited set of resources. The way I do this (this is just a personal view) is, on a system with N cores, to define N units each with 1 core with 1/N of the total memory. Assign one of those units to the primary domain, and then build the guest domains with 1 or more of those units (sizing as necessary - note that you can resize them on the fly so you don't have to get it perfect first time around). You can get very detailed and start allocating down to individual threads and specific amounts of memory, but it's so much better to just keep it simple.
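The arithmetic of that scheme is trivial - each unit is 1 core plus the total memory divided by the core count - but it's handy to sketch (unit_mem is just an illustrative helper):

```shell
# unit_mem: memory per unit (in MB) when carving a box into N
# single-core units, as described above.
unit_mem() {
    # $1 = total memory in MB, $2 = number of cores
    expr "$1" / "$2"
}

# e.g. a 16-core machine with 32768 MB gives 16 units of 1 core each.
```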

For a T2/T2+/T3 you need to futz with crypto MAUs. This is unnecessary on later systems.

To show:

ldm list -o crypto primary

To add:

ldm set-mau 1 primary

To set the CPU resources of the control domain:

ldm set-core 1 primary

(I want to set just 1 core. Allocation is best done by cores, not threads.)

Start the reconfiguration:

ldm start-reconf primary

Fix the memory:

ldm set-memory 4G primary

Save the config to the SP:

ldm add-config initial

Verify the config has been saved:

ldm list-config

Reboot to activate:

shutdown -y -g0 -i6

OK, so that creates a 1-core (8-thread) 4G control domain. And that all seems to work.

Next steps are to configure networking and enable terminal services. From the console (as you're reconfiguring the primary network):

ifconfig vsw0 plumb
ifconfig nxge0 down unplumb
ifconfig nxge0 inet6 down unplumb
ifconfig vsw0 inet netmask broadcast up
ifconfig vsw0 inet6 plumb up
mv /etc/hostname.nxge0 /etc/hostname.vsw0
mv /etc/hostname6.nxge0 /etc/hostname6.vsw0

For a T4, replace nxge with igb.

At this point, you have a machine with minimal resources assigned to the primary domain, which looks for all the world like a regular Solaris box, ready to create guest domains using the remaining resources.

Saturday, October 25, 2014

Tribblix progress

I recently put out a Milestone 12 image for Tribblix.

It updates illumos, built natively on Tribblix. There's been a bit of discussion recently about whether illumos needs actual releases, as opposed to being continuously updated. It doesn't have releases, so when I come to make a Tribblix release I simply check out the current gate, build, and package it. After all, it's supposed to be ready to ship at any time.

Note that I don't maintain a fork of illumos-gate, I build it essentially as-is. This is the same for all the components I build for Tribblix - I keep true to unmodified upstream as much as possible.

The one change I have made is to SVR4 packaging. I've removed the dependency on openssl and wanboot (bug #5188), which is a good thing. It means that you can't use signed SVR4 packages, but I've never encountered one. Nor can pkgadd now directly retrieve a package via http, but the implementation via wanboot was spectacularly dire, and you're much better off using curl or wget, which allows proper repository management (as zap does). Packaging is a little quicker now, but this change also makes it much easier to update openssl in future (it's difficult to update something your packaging system is linked against).

Tribblix is now firmly committed to gcc4 (as opposed to the old gcc3 in OpenSolaris). I've rebuilt gcc to fix visibility support. If you've ever seen 'warning: visibility attribute not supported in this configuration' then you'll have stumbled across this. Basically, you need to ensure objdump is found during the gcc build - either by making sure it's in the path or by setting OBJDUMP to point to it.

I've added a new style of zones - alternate root zones. These are sparse root zones, but instead of inheriting from the global zone you can use an alternate installed image. More on that later.

There's the usual slew of updates to various packages, including the obviously sensitive bash and openssl.

There's an interesting fix to python. I put software that might come in multiple versions underneath /usr/versions and use symlinks so that applications can be found in the normal locations. Originally, /usr/bin/python was a symlink that went to ../versions/python-x.y.z/bin/python. This works fine most of the time. However, if you called it as /bin/python it couldn't find its modules, so the symlink has to be ../../usr/versions/python-x.y.z/bin/python, which makes things work as desired.
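You can see why by resolving the link target textually against the path as invoked, which is effectively what python's prefix-finding does. A sketch (resolve_textually is just an illustrative helper):

```shell
# resolve_textually: join a link target onto the directory of the path
# as typed, without resolving any symlinks along the way.
resolve_textually() {
    # $1 = path as invoked, $2 = symlink target
    echo "$(dirname "$1")/$2"
}

# /bin/python with the old-style target lands in /versions (which doesn't
# exist); with the ../../usr form it lands back in /usr/versions.
resolve_textually /bin/python ../versions/python-2.7/bin/python
resolve_textually /bin/python ../../usr/versions/python-2.7/bin/python
```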

The package catalogs now contain package sizes and checksums, allowing verification of downloaded packages. I need to update zap to actually use this data, and to retry or resume failed or incomplete downloads. (It's a shame that curl doesn't automatically resume incomplete downloads the way that wget does.)

At a future milestone, upgrades will be supported (regular package updates have worked for a while, I'm talking about a whole distro upgrade here). It's possible to upgrade by hand already, but it requires a few extra workarounds (such as forcing postremove scripts to always exit 0) to make it work properly. I've got most of the preparatory work in place now. Upgrading zones looks a whole lot more complicated, though (and I haven't really seen it done well elsewhere).

Now, off to work on the next update.