Sunday, July 31, 2016

Minimizing apache and PHP

Recently I was looking at migrating a simple website in which every page but one was static.

The simplest thing here would be to use nginx. It's simple, fast, modern, and should make it dead easy to get an A+ on the Qualys SSL test.

But that non-static page? A trivial contact form. Fill in a box, and the back-end sends the content of the box as an email message.

The simplest thing here in days gone by would have been to put together a trivial CGI script. Only nginx doesn't do CGI, at least not directly. Not only that, but writing the CGI script and doing it well is pretty hard.

So, what about PHP? Now, PHP has gotten itself a not entirely favourable reputation on the security front. Given the frequent security updates, not entirely undeserved. But could it be used for this?

For such a task, all you need is the mail() function. Plus maybe a quick regex and some trivial string manipulation. All that is in core, so you don't need very much of PHP at all. For example, you could use the following flags to build (a full invocation is sketched after the list):

--disable-all
--disable-cli
--disable-phpdbg
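
Pulled together, a build along those lines might look something like this - a sketch only, the prefix is just an example, and if you later go the mod_php route discussed below you would also add --with-apxs2 pointing at your apache's apxs:

./configure --prefix=/opt/php-minimal --disable-all --disable-cli --disable-phpdbg
make
make install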

So, no modules. Far less to go wrong. On top of that, you can disable a bunch of things in php.ini:

file_uploads = Off [change]
allow_url_fopen = Off [change]
allow_url_include = Off [default]
display_errors = Off [default]
expose_php=Off [change]

Furthermore, you could start disabling functions to your heart's content:

disable_functions = php_uname, getmyuid, getmypid, passthru, leak, listen, diskfreespace, tmpfile, link, ignore_user_abort, shell_exec, dl, set_time_limit, exec, system, highlight_file, source, show_source, fpassthru, virtual, posix_ctermid, posix_getcwd, posix_getegid, posix_geteuid, posix_getgid, posix_getgrgid, posix_getgrnam, posix_getgroups, posix_getlogin, posix_getpgid, posix_getpgrp, posix_getpid, posix_getppid, posix_getpwnam, posix_getpwuid, posix_getrlimit, posix_getsid, posix_getuid, posix_isatty, posix_kill, posix_mkfifo, posix_setegid, posix_seteuid, posix_setgid, posix_setpgid, posix_setsid, posix_setuid, posix_times, posix_ttyname, posix_uname, proc_open, proc_close, proc_get_status, proc_nice, proc_terminate, phpinfo

Once you've done that, you end up with a pretty hardened PHP install. And if all it does is take in a request and issue a redirect to a static target page, it doesn't even need to create any html output.
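
As a very rough sketch of what I mean - the script name and form field here are entirely made up - the whole interaction could be as simple as:

# POST the form field, expect a redirect to a static thank-you page
curl -i -d 'message=hello' http://www.example.com/contact.php
# HTTP/1.1 302 Found
# Location: /thanks.html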

Then, how to talk to PHP?  The standard way to integrate PHP with nginx is using FPM. Certainly, if this was a high or even moderate traffic site, then that would be fine. But that involves leaving FPM running permanently, and is a bit of a pain and a resource hog for one page that might get used once a week.

So how about forwarding to apache? Integration using mod_php is an absolute doddle. OK, it's still running permanently, but you can dial down the process count and it's pretty lightweight. But we have a similar issue to the one we faced with PHP - the default build enables a lot of things we don't need. I normally build apache with:

--enable-mods-shared=most
--enable-ssl

but in this case you can reduce that to:

--enable-modules=few
--disable-ssl

Now, there is the option of --enable-modules=none, but I couldn't actually get apache to start at all with that - some modules appear to be essential (mod_authz_host, mod_dir, mod_mime, and mod_log_config at least), and going below the "few" setting is entering unsupported territory.

You can restrict apache even further with configuration: enable PHP, return an error for any other page, and listen only on localhost. (I like the concept of the currently experimental mod_allowmethods, as we might only want to allow POST in this case. Normally, disabling methods with current apache versions involves mod_rewrite, which is one of the more complex modules.)
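
For what it's worth, a sketch of the sort of configuration I have in mind - assuming apache 2.4 with mod_php loaded, and with a made-up script name - might be:

Listen 127.0.0.1:8080

<Directory "/var/www/htdocs">
    Require all denied
</Directory>

<Files "contact.php">
    Require all granted
    SetHandler application/x-httpd-php
    # with mod_allowmethods built, this could also say: AllowMethods POST
</Files>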

In the end, we elected to solve the problem a different way, but it was still an instructive exercise.

The above would be suitable for one particular use case. For a general service, it would be completely useless. Most providers and distributions tend to build with the kitchen sink enabled, because you don't know what your users or customers might need at runtime. They might build everything as shared modules and package each module separately (although this ends up being a pain to manage), or they rely on the user to explicitly enable and disable modules as necessary.

In Tribblix, I've tended to avoid breaking something like apache or PHP up into multiple packages.  There's one exception, which is that the PHP interface to postgresql is split out into a separate package. This is simply because it links against the postgresql shared library, so I ship that part separately to avoid forcing postgresql to be installed as a dependency.

Saturday, July 30, 2016

Building Tribblix packages

Software in Tribblix is delivered in packages, which come from one of three sources - an illumos build, a bootstrap distribution (OpenIndiana or OpenSXCE depending on hardware architecture), and native Tribblix packages.

The illumos packages are converted from the IPS repo created during a build of illumos-gate, using the repo2svr4 script in the tribblix-build repo. There's also a script, ips2svr4, in the same repo that's used to construct an SVR4 package from what's installed on a system using IPS packaging, such as OpenIndiana. The OpenSXCE packages are shipped as-is.

(The use of another distro to provide components was expedient during early bootstrapping. Over time, the fraction of the OS provided by that other distribution has shrunk dramatically. At the present time, it's mostly X11.)

What of the other packages, those natively built on Tribblix?

Those are described in the build repo.

In the build repo, there are a number of top-level scripts. Key among these is dobuild, which is the primary software builder. Basically, it unpacks a source archive, then runs configure, make, and make install. It can apply patches, run scripting before and after the configure step, and knows how to handle most things that are driven by autoconf.

There are some other scripts of note. The genpkg and create_pkg scripts go from a build to a package. The pkg_tarball script is an easy way to do a straight conversion of an archive to a package. There are scripts to create the package catalogs.

For each package, there is a directory named after the package, containing the files used in the build.

At the very minimum, you need a pkginfo file (this is a fragment; the build process creates the rest of the actual pkginfo file). There's the possibility of using fixit and fixinstall scripts to fix up any errant behaviour from the make install step before actually creating packages. There are depend files listing package dependencies, and alias files listing user-friendly aliases for packages.

However, how do you know how a package was actually built? Even for packages created with the dobuild script, there are a lot of flags that could have been provided. And a lot of software doesn't fit into the configure style of build in any case.

What I actually did was have a big text file containing the commands I used to build each package. Occasionally with some very unprintable comments about some of the steps I had to take to get things to build. (So simply adding that file to the repo was never going to be a sensible way forward.)

So what I've done is split those notes up and created a file build.sh for each package, which contains the instructions used to create that package. It assumes that the THOME environment variable points to the parent of the build repo, and that there's a parallel tarballs directory containing the archives. (Many of the scripts, unfortunately, assume a certain value for that location, which is the location on my own machine. Yes, that should be fixed.)
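
In other words, the assumed layout and usage is roughly this (the path and package name here are just examples):

export THOME=/home/me              # parent of the build repo
ls $THOME/build $THOME/tarballs    # recipes on one side, source archives on the other
cat $THOME/build/TRIBfoo/build.sh  # the recipe for that (hypothetical) package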

There are a number of caveats here.

The first is that some packages don't have a build.sh file. Yet. Some of these are my own existing packages, which were built outside Tribblix. Some go back to the very earliest days, and notes as to how they were built have been lost in the mists of time - these will be added whenever that package is next built.

The second is that the build recipe was valid at the time it was last used. If you were to run the recipe now, it might not work, due to changes in the underlying system - packages are not rebuilt unless they need to be, so the recipes can go all the way back to the very first release. It might not generate the same output. (This is really autoconf, which gropes around the system looking for things it can use, so running it again might pull in additional dependencies. Occasionally this causes problems and I need to explicitly enable or disable certain features. In some cases, you have to uninstall packages to make the build run in a sane manner.)

The third is that, while the build recipe looks like a shell script, and in many cases will actually function as such, it's really a recipe that you cut and paste into a terminal. At least, that's what I do. Sometimes doing it that way is necessary, because some manual hacky workaround I needed is only present in the build script as a comment.

This has been an outstanding TODO item for a while now, so I'm glad to have got it out of the way.



Wednesday, June 22, 2016

Getting to grips with Docker

A while ago, I described how we took an existing application build script and managed to run it inside Docker.

Having played with this inside Docker a little more, it's probably worth scribbling down a few notes I happened to stumble across on the way.

I'm looking at having 2 basic images: as a foundation, Ubuntu with all the packages we want added; then an image that inherits FROM that with our application stack built and installed (but not configured). The idea behind this layering is simply to separate the underlying OS, which is fairly standard, from the unique stuff that is all ours.

Then, you create an instance image from the application image, simply by running a configuration script that you COPY in. Once you've got a configured application instance, you create a volume container from it, and then run the application image using the volume(s) from the instance image. You keep that volume container around, just as a home for your data, essentially forever. And you can run multiple application instances from the same base image - you just need to configure each one and create a volume container for it.
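
To sketch that workflow out with the docker command line (all the names here are made up, and details such as where the volumes live depend on your Dockerfiles):

docker build -t example/base base/        # Ubuntu plus the packages we need
docker build -t example/app app/          # FROM example/base, with our stack installed
# configure an instance by running the script we COPYed in, and keep the result
docker run --name app1-setup example/app /configure.sh
docker commit app1-setup example/app1
# create a volume container from the instance image, then run the app with its volumes
docker create -v /data --name app1-data example/app1 /bin/true
docker run -d --volumes-from app1-data example/app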

That's a brief overview of the workflow, now some tweaks and pitfalls.

We're using Ubuntu, so the first step is to run apt-get with our list of packages. This originally created a 965MB image. It's not going to be small - we need both java and a full development stack to create our application.

However, some of the stuff installed we'll never need. Using the --no-install-recommends flag to apt-get saved us about 150M. The recommends list is stuff that might be useful, but not essential. But remember - our Docker container is only ever going to run a fixed set of applications, so we'll never need any of the optional stuff. The only thing to be careful of here is if you accidentally depend on something in the recommends list without realizing you're only getting it indirectly.
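
In the Dockerfile it's just one extra flag on the apt-get line (the package list here is purely illustrative):

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    openjdk-7-jdk build-essential postgresql sudo wget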

We can do slightly better in terms of saving space. We use postgresql, but get it to store the database files in our own locations, so we can remove /var/lib/postgresql/9.X and what's underneath it, saving almost another 40M.

One thing to be aware of is that the list of packages in the official Ubuntu Docker image isn't quite the same as you would get from a regular Ubuntu install. There are one or two packages we never needed to add before, because they were always present in a regular install, but which we have to add explicitly with Docker. Things like sudo and wget are on this list, so I needed to add those to the apt-get list.

Another thing to be aware of is that because you're building images afresh each time, you aren't guaranteed that new users will always get the same uid and gid. If you change the list of packages (even by just adding --no-install-recommends), this might change which users exist, and that affects the uid assigned to later users. I got burnt when a later base build ended up giving the postgres user a different uid, so it didn't own its database files on the persistent volume any more. I think the long term fix here is to create the users you need by hand before installing any packages, forcing the uid and gid to known values.
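
Something like this, placed before any apt-get lines, would do that - the numeric values are arbitrary, the point is simply that they're fixed:

# create the users we rely on with known uid/gid before any package scripts do it for us
RUN groupadd -g 5432 postgres && \
    useradd -u 5432 -g postgres -d /var/lib/postgresql postgres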

In order to keep image sizes small, you'll often see "rm -rf /var/lib/apt/lists/*" in a Dockerfile. In general, deleting temporary files is a good idea. This includes any files created by your own software deployment stage. Cleaning that up properly saved me another 200M or so in the final image. (Remember to clean up /tmp, that's part of the image too.)
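
One detail worth remembering is that the deletion has to happen in the same RUN step that created the files; removing them in a later step still leaves the data sitting in an earlier layer. Roughly (the deploy script name is made up):

RUN /opt/build/deploy.sh && \
    rm -rf /opt/build/tmp /tmp/* /var/tmp/*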

It isn't strictly related to Docker, but I hit an ongoing problem - in some environments I ended up blocking on /dev/random. Search around and you'll find a lot of problems reported, especially related to java and SecureRandom (or, in our case, jruby). Running Docker on my Mac was fine, running it on a server in the cloud gave me 15-minute startup times. The solution here is to add -Djava.security.egd=file:///dev/urandom or -Djava.security.egd=file:/dev/./urandom to your java startup (or JAVA_OPTIONS).

(And, by the way, this illustrates that while Docker can guarantee that your app is the same in all environments, it doesn't magically protect you from differences in the underlying environment that can have a massive impact on your application.)

My application listens on ports 8080 and 8443, which I map on the host to the common ports, with

docker run -p 80:8080 -p 443:8443 ...

This works fine for me in testing, when I'm only running one copy and simply point a browser at the host. Networking gets a whole lot more complicated with multiple containers, although I think something like a load-balancer in front might work.

I've been using the Docker for Mac beta for some of this - while at times it's been beta in terms of stability, generally I can say it's a very impressive piece of work.

Sunday, June 19, 2016

Data Destruction and illumos

When disposing of a computer, you would like to be sure that it has no data on its storage that could be accessed by the direct recipient (or any future recipient). It would be somewhat embarrassing for personal photos to be retrieved; it would be far worse if financial or business data were to be left accessible.

The keywords you're looking for here are data remanence and disk sanitization.

There are three methods to remove data from a disk: total physical destruction, degaussing, and overwriting the data. The effectiveness of these methods is up for debate, as is the feasibility of a sufficiently determined and well-funded attacker being able to retrieve data.

Here I'm just going to discuss overwriting the disk. For a lot of casual and home purposes that'll be enough, and is a lot better than not bothering at all, or simply reformatting the drive (or reinstalling an OS on it), which will leave a lot of disk sectors untouched and amenable to simply being read off in software.

The standard here seems to be DBAN. However, it's not seen much activity in a while, and was sold to a company that offers a commercial product that's claimed to be much better.

Basically, all DBAN is doing is scribbling over every sector on a drive. That's not hard.

On Solarish systems, format/analyze/purge does essentially the same thing, and is the documented method for wiping hard drives.
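
The manual procedure is short enough - run format (from a system that isn't booted from the disk in question), pick the disk from the menu, and then:

format> analyze
analyze> purge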

However, it's a little fiddly to use and requires a modest level of expertise to get that far. You can't purge the disk you're booted from; the suggested solution is to boot from installation media, drop to a shell, and run format from there. That has a couple of problems - it's still very manual, and the install (or live) media are rather large and can take an age to boot.

So I started to think, how hard could it be to create a minimalist illumos boot media that just contains the format command, and a simple script around it to make it easy to run?

I've already done most of the work, as part of the minimal viable illumos project. It was pretty easy to create a new variant.

The idea is to erase disk drives, so the intended target is physical hardware rather than a hypervisor. So I added a number of common storage drivers to the image. (As an aside, I really have no idea as to what storage HBAs are actually in common use, so which drivers to put in this list or on the Tribblix install iso is largely guesswork.)

There should be no need for networking. You really don't want a mechanism for any external access to the system while the disks are being wiped, so networking is simply not there.

And I added a simple wrapper script that enumerates disk drives and runs the appropriate format commands. If you want to see how this works, just look at the wrapper script. All this is in the mvi repo, see the files with "wipe" in their names.
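
The real thing is in the repo, but the rough shape of it is something like this (a sketch only; the real script handles the details):

# enumerate the disks format can see, then purge each one in turn
DISKS=`format < /dev/null | awk '/^ *[0-9]+\./ {print $2}'`
for d in $DISKS
do
    format -d $d -f /tmp/purge.cmds
done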

The iso image I created (14M in size) is also available.

(Why is such a small image good, you might ask? Apart from simply being sure that it's only capable of doing the one function that it's advertised for, if you're trying to wipe a remote system mounting the image over the network, then the smaller the better.)

I tested this in VirtualBox, which exposed a few quirks. For one, the defect list switching you'll see in the docs doesn't work there (I have no idea if it's going to work on any real hardware). The other is that the disk image I was using was a file on a compressed zfs file system. The purge process writes a repeating pattern, which is very compressible, so the 1G disk image I was testing only takes up 16M of disk space.

While I don't think it's really a proper alternative to DBAN, I think it's useful as a real-world example of how to use mvi.

Thursday, June 16, 2016

Connecting to legacy Sun ILOM with modern clients

The bane of many a system administrator's existence is the remote management capability on their servers. In Sun's case, I'm talking about the ILOM.

(Of course, Sun have had RSC and ALOM and eLOM and maybe some other abominations over time.)

Now, for many purposes, you can just ssh to the ILOM and you're done. On Sun boxes anyway, where you often have serial console redirection and the OS using the serial console.

However, if you want to manage the system fully, you need a proper client. There are a couple of common cases: first, if you need the VGA console (either for a broken OS, or to interact with the BIOS); second, if you want to do storage redirection (in other words, you want to remotely present a bootable image).

That's where the fun starts, and you get in a tangled relationship with Java. Often, it ends up being a tale of woe.

And that's on the best of days. With legacy hardware - such as the X4150 - it gets a whole lot more interesting.

Now, while the X4150 is legacy and well past end of life now, it turns out that there was an updated firmware release in 2015. (For POODLE, I think.) If you can, apply this, as it should fix some of the UI compatibility issues with newer browsers. (Not all, I suspect, but if you've tried using a current browser and only got half the GUI then you know what I'm talking about.)

However, that doesn't necessarily mean that the Java application is going to work. There are actually a couple of issues here.

The first is that the application is a signed jar, and the certificate used to sign it has expired. Worse, due to Java's rather chequered security history, current versions have draconian checks in place which you'll run into. To fix, go to the Java Control Panel, down to "Perform signed code verification checks" and change it to "Do not check". Generally, disabling security like this is a bad idea, but in this case it's necessary.

Next, if you start up the application, click through the remaining security dialogs, and try to connect to the console, you'll get a cipher suite mismatch failure. The ILOM is pretty old, and uses SSLv3 which is disabled by default in current Java. You'll need to edit the java.security file (in ${JAVA_HOME}/jre/lib/security/java.security[*]) and comment out two lines - the ones with jdk.certpath.disabledAlgorithms and jdk.tls.disabledAlgorithms, then run the application again.

With luck, that will at least enable you to get to the console.

If you want storage redirection, then you're in for more fun. For starters, you need to be running Solaris, Linux, or Windows. If you're on a Mac, it's not going to work. You'll need to get yourself another machine, or run a VM with something else installed.

And the other thing is that you need to be running a 32-bit Java Virtual Machine. If you're running Solaris, this rules out Java 8 - you'll have to go back to Java 7. On other platforms, you'll have to make sure you have a 32-bit JVM, which might not be the default and you might have to manually install it.

Oh, and if you're on Linux or Solaris and running OpenJDK (rather than the Oracle builds) then you'll need IcedTea to get the javaws integration. At least with IcedTea you can ignore the Java Control Panel stuff.

[*: On my Mac I discovered that I had 2 different installations of Java. The one that you get if you type "java" isn't the same one used for browser integration and javaws launching. Running /usr/libexec/java_home gave me the wrong one; I ended up looking at the ps output when running the Control Panel to find out the location of the one I really needed.]

Monday, June 06, 2016

What's present in libc on illumos?

Over time, operating systems such as illumos gain new functionality. For example, new functions are added to libc. But how do you know what's there, when it was added, and whether a newer version contains something that's missing?

So perhaps the first place to start is with elfdump. If you run

elfdump -v /lib/libc.so

then you'll get a bunch of lines like so:

     index  version                     dependency
       [1]  libc.so.1                                        [ BASE ]
       [2]  ILLUMOS_0.13                ILLUMOS_0.12        
       [3]  ILLUMOS_0.12                ILLUMOS_0.11        
       [4]  ILLUMOS_0.11                ILLUMOS_0.10        
 ...

      [51]  SUNW_0.8                    SUNW_0.7            
      [52]  SUNW_0.7                    SYSVABI_1.3         
      [53]  SYSVABI_1.3                                     
      [54]  SUNWprivate_1.1                                 

So, these lines correspond to the various released versions of libc. Every time you add a new function to libc, that means a new version of the library. And each version depends on the one before.

These versions are listed in a mapfile, at usr/src/lib/libc/port/mapfile-vers in the illumos source. So, what you can see here is that ILLUMOS_0.13 (which is the version shipped with Tribblix 0m16) is where eventfd got added. If you want strerror_l (which you do if you're building vlc), then you need ILLUMOS_0.14; if you want pthread_attr_get_np (which you need for QT5) then you need ILLUMOS_0.21. (Unfortunately, Tribblix 0m17 only picks up ILLUMOS_0.19.) Looking back, you can see everything exposed by libc and what version of Solaris or illumos it was added in.
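
A quick way to check a given libc for a specific function is to look it up in the symbol table; the ver column in the output is the version index, which you can match up against the elfdump -v listing above. For example:

elfdump -s /lib/libc.so.1 | grep -w strerror_l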

Another trick is to run elfdump against a binary. For example:

elfdump -v /usr/gnu/bin/tar

will tell us which versions of which libraries are required. Part of this is:

Version Needed Section:  .SUNW_version
     index  file                        version
       [2]  libnsl.so.1                 SUNW_0.7            
       [3]  libc.so.1                   ILLUMOS_0.8         
       [4]                              ILLUMOS_0.1          [ INFO ]
       [5]                              SUNW_1.23            [ INFO ]
       [6]                              SUNW_1.22.6          [ INFO ]

This is quite informative. It tells you that gtar calls the newlocale stuff from ILLUMOS_0.8, and a bunch of older stuff. But the point here is that it is calling illumos additions to libc, so this binary won't work on Solaris 10 (and probably not on Solaris 11 either). If you're building binaries for distribution across distros, you can use this information to confirm that you haven't accidentally pulled in functions that might not be available everywhere.


Wednesday, May 18, 2016

Installing Tribblix into an existing pool

The normal installation method for Tribblix is the live_install.sh script.

This creates a ZFS pool to install into, creates file systems, copies the OS, adds packages, makes a few customizations, installs the bootloader, and not much else. It's designed to be simple.

(There's an alternative, to install to a UFS file system. Not much used, but it's kept just to ensure no insidious dependencies creep into the regular installer, and is useful for people with older underpowered systems.)

However, if you've already got an illumos distro installed, then you might already have a ZFS pool, and it might hold useful data that you would rather not wipe out. Is there not a way to create a brand new boot environment in the existing pool and install Tribblix to that, preserving all your data?

(Remember, also, that ZFS encourages the separation of OS and data. So you should be able to replace the OS without disturbing the data.)

As of Tribblix Milestone 17, this will work. Booting from the ISO and logging in, you'll find a script called over_install.sh in root's home directory. You can use that instead of live_install.sh, like so:

./over_install.sh -B rpool kitchen-sink

You have to give it the name of the existing bootable pool, usually rpool. It will do a couple of sanity checks to be sure this pool is suitable, and will then create a new BE there and install to that.

Arguments after the pool name are overlays, specifying what software to install, just like the regular install.

It will update grub for you, so that you have a grub on the pool that is compatible with the version of illumos you've just added. With -B, it will update the MBR as well.

It copies some files, the minimum that define the system's identity, from the existing bootable system into the new BE. This basically copies across user accounts (group, passwd, shadow files) and the system's ssh keys, but nothing else.

Any existing zfs file systems are untouched, and will be present in the new system. You'll have to import any additional zfs pools, though.

When you boot up after this, the grub menu will just contain the new BE you just created. However, any old boot environments are still present, so you can still see them, and manipulate them, using beadm. In particular, you can mount up an old BE (in case there are important files you need to get back), and activate an old BE so you can boot into the old system if so desired.
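
For example (the old BE name here is just whatever beadm list shows for your previous install):

beadm list
beadm mount oldbe /a        # fish important files out of the old BE under /a
beadm unmount oldbe
beadm activate oldbe        # boot back into the old system next time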

Amongst other things, you can use this as a recovery tool, when your existing system has a functioning root pool but won't boot.

I've also used this to "upgrade" older Tribblix systems. While there is an upgrade mechanism, this really only works (a) for very recent releases, and (b) to update one release at a time. With this new mechanism, I can simply stick a new copy of Milestone 17 on an old box, and enjoy the new version while having all my data intact.