Friday, March 04, 2016

Supermicro - illumos compatible server configuration

We were recently in the market for some replacement servers. We run OmniOS in production, so something compatible with illumos is essential.

This is a little trickier than it appears. Some of the things to be aware of are specific to illumos, while others are more generic. But after spending some time reading an awful lot of spec sheets, and with the help of the OmniOS mailing list, we got to a final spec that ought to be pretty good.

I'm going to talk about a subset of Supermicro systems here. While other vendors exist, it can be harder to put together a working spec.

To start with, Supermicro have a bewildering list of parts. But we started out looking at 2U storage servers, with plenty of 2.5" disk slots in the front for future expansion.

Why 2.5"? Well, it allows you twice as many slots (24 as opposed to 12) so you have more flexibility. Also, the industry is starting to move away from 3.5" drives to 2.5", but that's a slow process. More to the point, most SSDs come in the 2.5" form factor, and I was planning to go to SSDs by default. (It's a good thing now, I'm thinking that choosing spinning rust now will look a bit daft in a few years time.) If you want bulk on the cheap, then something with 3.5" slots that you can put 4TB or larger SAS HDDs in might be better, or something like the DataON 1660 JBOD.

We're also looking at current motherboards. That means the X10 at this point. The older X9 boards are often seen in Nexenta configurations, and those will work too. But we're planning for the long haul, so we want hardware that will stay useful for as long as possible.

So there is a choice of chassis. These are:
  • 216 - 24 2.5" drive slots
  • 213 - 16 2.5" drive slots
  • 826 - 12 3.5" drive slots
  • 825 - 8 3.5" drive slots
The ones with smaller numbers of drive slots have space for something like a CD or DVD drive instead.

The next thing that's important, especially for illumos and ZFS, is whether it's got a direct attach backplane or whether it puts the disks behind an expander. Looking at the 216 chassis, you can have:
  • 216BA / 216BAC - direct attach backplane
  • 216BE1C - single expander backplane
  • 216BE2C - dual expander backplane
So you can see that something with A or AC is direct attach, E1C is a single expander, and E2C is a dual expander. (With a dual expander setup you can use multipathing.)

Generally, for ZFS, you don't want expanders. Especially so if you have SATA drives - and many affordable SSDs are SATA. Vendors and salespeople will tell you that expanders never cause any problems, but most illumos folk appear allergic to the mere mention of them.

(I think this is one reason Nexenta-compatible configurations, and some of the preconfigured SuperServer setups, look expensive at first sight. They often have expanders, use exclusively SAS drives as a consequence, and SAS SSDs are expensive.)

So, we want the SC216BAC-R920LPB chassis. To connect up 24 drives, you'll need HBAs. We're using ZFS, so we don't need (or want) any sort of hardware RAID, just direct connectivity. That points to the LSI 9300-8i HBA, which has 8 internal ports, so you'll need three of them to connect all 24 drives.
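One nice thing about direct attach is that every drive shows up as its own device, so you can lay the pool out however you like. As a rough sketch (the device names here are purely illustrative), you could build a pool of mirrors where each vdev pairs drives hanging off different HBAs, so losing a controller doesn't take out both sides of any mirror:

  # each mirror pairs a drive on one HBA with a drive on another
  zpool create data \
      mirror c1t0d0 c2t0d0 \
      mirror c1t1d0 c2t1d0 \
      mirror c1t2d0 c2t2d0
  zpool status data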

For the motherboard, the X10 has a range of models. At this point, select how many and what speed network interfaces you want.
The dual network boards have 16 DIMM slots; the quad network boards have 24. The onboard NICs are Intel i350 (1GbE) or X540 (10GbE), both of which are supported by illumos.

The 2U chassis can take a little 2-drive bay at the back of the machine; you can put a pair of boot drives in there and wire them up directly to the SATA ports on the motherboard, giving you an extra 2 free drive slots in the front. Note, though, that this is only possible with the dual network boards - the quad network boards take up too much room in the chassis. (It's not so much the extra network ports as such, but the extra DIMM slots.)
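If you do put a mirrored ZFS root pool on that rear pair, the process on OmniOS of this vintage is roughly: install to one disk, attach the second, and make it bootable. A minimal sketch (device names are placeholders, and the slice and boot-loader details depend on your release and how the installer labelled the disks):

  # attach the second boot disk to the root pool
  zpool attach rpool c3t0d0s0 c3t1d0s0

  # once the resilver finishes, put the boot loader on the new disk
  # so the box can boot from either drive
  zpool status rpool
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0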

Another little quirk: as far as I can tell, the quad 1G board has fewer USB ports, and they're all USB 3. You need USB 2 for illumos, and I'm not sure whether you can demote those ports to USB 2 or not.

So, if you want 4 network ports (to provide, for example, a public LACP pair and a private LACP pair), you want the X10DRi-T4+.
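For reference, setting up those LACP pairs on illumos is done with dladm. A sketch, assuming the four X540 ports show up as ixgbe0 through ixgbe3 (the link names and the address are placeholders):

  # public-facing LACP pair
  dladm create-aggr -L active -l ixgbe0 -l ixgbe1 pubaggr0
  # private/backend LACP pair
  dladm create-aggr -L active -l ixgbe2 -l ixgbe3 privaggr0

  # then plumb an address on each aggregation
  ipadm create-if pubaggr0
  ipadm create-addr -T static -a 192.0.2.10/24 pubaggr0/v4

(The switch ports at the other end need to be configured for LACP as well.)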

Any E5-2600 v3 CPUs will be fine. We're not CPU bound, so we just went for cheaper, lower-power chips, but that's a workload thing. One thing to be aware of is that you do need to populate both sockets - if you only fit one then you lose half of the DIMM slots (which is fairly obvious) and most of the PCIe slots (which isn't - look at the documentation carefully if you're thinking of doing this, but it's better to get a single-socket motherboard in the first place).

As for drives, we went for a pair of small Intel S3510 for the boot drives, which will be mirrored using ZFS. For data, larger Intel S3610, as they support more drive writes - analysis of our I/O usage indicated that we are worryingly close to the DWPD (Drive Writes Per Day) rating of the S3510, so the S3610 was the safer choice, and it isn't that much more expensive.
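The DWPD arithmetic is simple enough: the drive's capacity times its DWPD rating is your sustained daily write budget, which you can compare against measured daily writes (from iostat or zpool iostat over a representative period). With illustrative round numbers rather than figures from the data sheets:

  daily write budget = capacity x DWPD
  e.g.  480GB x 0.3 DWPD  =  ~144GB/day
        480GB x 3 DWPD    =  ~1.4TB/day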

Hopefully I'll be able to tell you how well we get on once they're delivered and installed.
