I can time two parts of the install. The first is the actual Solaris installation, from the begin script to the finish script; this is only part of the installation process, but it's easy to measure. The second is my local installation, which takes place on the next boot: it installs some extra packages, runs some cleanup scripts, untars the whole of /opt/sfw, and applies current patches, and the time includes the reboot. I've done 4 different systems this week, and the times (in minutes) are shown below:
Type | CPU speed | install (min) | local install (min) |
---|---|---|---|
Ultra 60 | 2x360MHz | 45 | 73 |
V240 | 2x1.5GHz | 16 | 40 |
V440 | 4x1.28GHz | 18 | 34 |
T2000 | 8x1.0GHz | 36 | 60 |
OK, so install time isn't necessarily a good metric, but it's probably a fair indication of how long general system administration is going to take on such a system. It's also essentially serial, which doesn't play to the T1 chip's strengths. Even so, the numbers here are slightly disappointing: the T2000 is doing slightly worse for its clock speed than the other SPARC systems I've got available to play with today.
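To put some rough numbers on that, here's a quick back-of-envelope sketch. The GHz-minute product is just a crude normalisation of my own: if the install were purely serial and clock-bound, install time times per-core clock speed would be roughly constant across machines. The clock speeds and install times are taken straight from the table above.

```python
# Crude per-clock comparison of the install times above.
# If the work were purely serial and clock-bound, minutes * GHz would be
# roughly constant; a higher figure means worse per-clock performance.
systems = [
    ("Ultra 60", 0.36, 45),   # 2x360MHz, single-core clock used here
    ("V240",     1.50, 16),
    ("V440",     1.28, 18),
    ("T2000",    1.00, 36),
]

for name, ghz, minutes in systems:
    print(f"{name:8s}  {minutes:2d} min x {ghz:.2f} GHz = {minutes * ghz:5.1f} GHz-min")
```

By that crude measure the Ultra 60 comes out around 16 GHz-minutes, the V240 and V440 around 23-24, and the T2000 at 36, which is what prompts the "worse for its clock speed" remark.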
2 comments:
Don't you think it's IO limited rather than CPU limited? How many hard drives were in each server and how fast are those drives?
There's obviously some IO involved, but the install involves a lot of CPU work (uncompressing the bzipped cpio archives is a big cost), and so does the patching phase.
Besides, it's not really relevant. Each machine is installing onto a single internal drive at this point. If IO were the bottleneck, it would indicate that the fancy new SAS drives are dismally poor, which I hope isn't the case.