Sunday, January 31, 2010

A better terminal, part 2

Thanks for the comments on my search for a better terminal.

First, Eterm. If I build it with gcc it SEGVs on me, and it won't compile with Studio 12 at all. I used to use it, but I don't seem to have a working version available right now.

As for providing a binary for evilvte, I'm not sure how useful that is. The only way to configure it is to build from source. And I'm pretty sure my configuration choices are likely to be different from anyone else's.

On to terminator. (Which one? This one, or that one?) The first fails to build for me - not only does it require a non-standard make, it requires a particular version of that non-standard make, one that my regular Solaris 10 box doesn't have. The second just gives a Python traceback.

Then there's the whole rxvt family. I actually use rxvt quite a bit when I want to run something (like top) in a throwaway terminal window, because it's so lightweight. But it has to be said that it's nowhere near as lightweight as it was, and urxvt is much heavier than xterm. There are little irritants too, such as urxvt needing its own terminfo entry added, which is just another barrier, or not being able to override rxvt's use of a stipple pattern in the xterm-style scrollbar (I always make it solid).
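(For the terminfo problem, the usual workaround is to compile the entry yourself on the host that's missing it; something like the following, where remotehost stands in for whatever box you're logging into. Run as an ordinary user, tic installs into ~/.terminfo, which ncurses will find.)

infocmp rxvt-unicode > rxvt-unicode.ti
scp rxvt-unicode.ti remotehost:
ssh remotehost tic rxvt-unicode.ti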

I've tried quite a few others, most of which plain don't build.

Saturday, January 30, 2010

Looking for a better terminal

It sometimes seems I spend most of my life in xterm, usually logged into a remote server.

And while xterm isn't bad, as terminal emulators go, sometimes I want a bit more jazz and reliability. Recently xterm hasn't been as reliable as it should be, and it doesn't cut it for some use cases either - logged into a Sun (oops, Oracle...) ILOM, for instance. Things like color, background images, and transparency all break up the monotony.

Locally on my work desktop I run gnome-terminal for some things. It's a resource hog, but its heavy consumption is offset by the number of windows I'm running in it. It doesn't work half as well on remote servers or under other graphical environments, though.

At home I run Xfce, and its Terminal is just as functional as gnome-terminal but with only half the resource requirements. The one showstopper (and this is a real killer for a large number of otherwise decent applications) is its requirement for D-Bus, which makes it far too difficult to run standalone in an arbitrary environment.

So I've been looking at some alternatives, and quite like evilvte. It's functionally equivalent to gnome-terminal and the Xfce Terminal, using the same vte widget, but it's lighter, faster, and has far fewer dependencies (in particular, no D-Bus). So it could be used as a replacement for xterm.

Building it is somewhat manual. (But no stupid build system to fight with.) And configuration is a case of editing config.h and running make again to rebuild the binary. But I like it.
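The whole cycle is short enough not to hurt; roughly this (exact file names and targets may vary between versions):

vi config.h    # pick the font, scrollbar, tabs, and so on
make           # rebuilds the binary with config.h baked in
./evilvte      # try it out before putting it on your PATH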

Friday, January 29, 2010

Stupid build systems

There was a time when building software was easy. You typed make and it almost always worked. Sometimes you had to know the secret incantation of xmkmf -a, but usually if make didn't work straight off the software wasn't worth using.

In the advanced days of the 21st century, of course, such simplicity - something that actually works - is rarely to be found. We have things like the autotools (the evil behind ./configure), which basically make a bunch of unsubstantiated guesses about your system and construct a more or less random build configuration from them. And heaven help you if libtool gets its teeth into your software - it's almost guaranteed to miscompile it in a manner that's undebuggable and unfixable.

So cmake promised to be a welcome relief from this madness. Only it's not. I've been having a go at building the awesome window manager on Solaris. When it works I'll post more details, but I almost flipped when, having fought it into submission and built it successfully, I ran the install and saw:

-- Removed runtime path from "/packages/awesome/bin/awesome"

And, on checking, it had done exactly that - stripped out the RPATH information so the binary it had carefully built stood no chance whatsoever of actually working.
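If you hit the same thing, CMake can at least be told to put an install-time RPATH back; something like this on the configure line (the path here is just an example, use whatever your libraries need):

cmake -DCMAKE_INSTALL_RPATH=/usr/sfw/lib \
      -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON ..

That sets an explicit RPATH on the installed binary instead of having it stripped.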

And this is progress?

Wednesday, January 27, 2010

Solaris Containers and Shared Memory

I've been migrating some old systems from truly antique Sun boxes onto a small number of new Sun T5240s. Lots of zones, one per new physical system.

This includes some truly antique versions of Oracle.

To keep the version police happy with the compatibility matrix, it had to be Solaris 8: even though everything works fine on Solaris 10, they insisted. Enter Solaris 8 Containers.

Now, Solaris 10 has reasonable default shared memory settings. However, a Solaris 8 Container gives a faithful emulation of a Solaris 8 system, miserly shared memory settings included. So how do you set better values, given that the normal Solaris 10 mechanisms don't apply?

The solution is simple - so simple that it never actually occurred to me: put the settings you want in the Container's /etc/system file in the traditional way, and reboot the Container.
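For example (the values are illustrative, not recommendations, and myzone stands in for the Container's name):

set shmsys:shminfo_shmmax = 4294967295
set shmsys:shminfo_shmmni = 256

in the Container's /etc/system, followed by, from the global zone:

zoneadm -z myzone reboot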

Tuesday, January 26, 2010

Preconfiguring Containers with sysidcfg

You can preconfigure a Solaris Zone by placing a valid sysidcfg file into /etc/sysidcfg in the zone, so it finds all the information when it boots.
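In practice that means dropping the file under the zone's root from the global zone before the first boot; something like this, assuming /zones/mars is the zonepath:

cp sysidcfg /zones/mars/root/etc/sysidcfg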

What's also true, at least for native zones on Solaris 10, is that you can specify a cut-down sysidcfg file. For example:

network_interface=PRIMARY {
    hostname=mars
}
system_locale=C
timezone=US/Pacific
name_service=NIS {
    domain_name=foo.com
    name_server=bar.foo.com(192.168.1.1)
}
security_policy=NONE
terminal=xterm
root_password=0123456789abcd
nfs4_domain=foo.com

If you try doing this on a Solaris 8 Container, it will go interactive. So you need to supply a bit more information. Such as:

network_interface=PRIMARY {
    hostname=mars
    netmask=255.255.255.0
    protocol_ipv6=no
}
system_locale=C
timezone=US/Pacific
name_service=NIS {
    domain_name=foo.com
    name_server=bar.foo.com(192.168.1.1)
}
timeserver=localhost
security_policy=NONE
terminal=xterm
root_password=0123456789abcd

where I've added the netmask and protocol_ipv6 settings to the network_interface section, and told it a timeserver. Clearly the Solaris 10 zone can work these out for itself, but a Solaris 8 branded zone needs to be told explicitly. (Also, it doesn't need the nfs4_domain, as that makes no sense for a Solaris 8 system.)

Saturday, January 23, 2010

Compressing Backup Catalogs

Backup solutions such as NetBackup keep a catalog so that you can find a file that you need to get back. Clearly, with a fair amount of data being backed up, the catalog can become very large. I'm just sizing a replacement backup system, and catalog size (and the storage for it) is a significant part of the problem.

One way that NetBackup deals with this is to compress old catalogs. On my Solaris server it seems to use the old compress command, which gives you about 3-fold compression by the looks of it: 3.7G goes to 1.2G, for example.

However, there's a problem: to read an old catalog (to find something from an old backup, say) it has to be uncompressed first. There's quite a delay while that happens and, even worse, you need spare disk space to hold the uncompressed catalog.

Playing about the other day, I wondered about using filesystem compression rather than application compression, with ZFS in mind. So, for that 3.7G sample:

Compression     Size
Application     1.2G
ZFS default     1.4G
ZFS gzip-1      920M
ZFS gzip-9      939M


Even with the ZFS default, we're doing almost as well. With gzip, we do much better. (And it's odd that gzip-9 does worse than gzip-1.)

However, even though the default level of compression doesn't squeeze the data quite as well as the application does, it's still much better to let ZFS do the compression, because then you can compress all the data: leave it to the application and you always keep the recent data uncompressed for easy access, compressing only the old stuff. So assume a catalog twice the size above, with NetBackup compressing half of it: the application case then uses 3.7G uncompressed plus 1.2G compressed. The total disk usage comes out as:

Compression     Size
Application     4.9G
ZFS default     2.8G
ZFS gzip-1      1.8G
ZFS gzip-9      1.8G


The conclusion is pretty clear: forget about getting NetBackup to compress its catalog, and get ZFS (or any other compressing filesystem) to do the job instead.
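For what it's worth, setting that up is a one-liner; something like the following, with a made-up dataset name:

zfs create -o compression=gzip-1 pool/nbu/catalog

and zfs get compressratio pool/nbu/catalog will tell you how well it's doing once some data is in there.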