Seems like Xmas has come early.
Of course, Sun go on at length about the Java Enterprise System. Now, this is interesting in parts, but JES is a complicated beast and likely to be of interest primarily to - well - Enterprises.
What I like, though, is the promise of free stuff a bit further down. There has been a good emphasis on developers recently - Studio 11 and Creator, for example. But what's also now promised is free versions of Tarantella and SunRay, which are likely to be of interest to a far wider range of customers.
And as I read it, the nebulous N1, including Sun Management Center, is included in the deal too.
Wednesday, November 30, 2005
Tuesday, November 29, 2005
suspend/resume at a crawl
I've got a Sun Blade 1500 at home (one of the old red ones). Works great.
Apart from suspend/resume, that is.
I have no idea why, but both suspend and resume take an absurd amount of time. The suspend isn't too bad (slower than it should be), but resume is in the 5-10 minute range. To use an Americanism, this sucks.
(It's doubly odd because I've tried this on a Blade 150, and that's much, much quicker.)
Get it right first time!
Many years ago I wrote a simple system and network monitoring tool. It's been developed on and off over the years, but has now reached a major impasse.
Basically, I designed it wrong 10 years ago. I started out with a 2 state system. If the status is 0, then it's fine. If the status is 1, it's broken and needs fixing. Sounds reasonable, right?
Then I realized I needed to add another state, so I defined it so that if the status is 2, there's a warning condition. And all worked well for a few years.
The problem with this scheme is that the severity of the problem isn't a linear function of the status. So I end up playing all sorts of games trying to analyze the status codes trying to work out just how bad the situation really is. It would be much easier if I could simply retrieve the maximum status out of the database - no fiddling required! And I can order problems simply by sorting on the status.
Thinking about this a bit more, this is the obvious thing to do. So obvious, in fact, that I was a dullard for not thinking about this at the start. (But, when I started writing this particular monitoring tool, I wasn't thinking about what version 3 would look like 10 years down the line. And I started out by using the return code from scripts as the status, which is where 0 and 1 came from.)
Of course, I now have to consider what the best scheme might be. Do I simply have 0 for good, 1 for warning, 2 for dead? I think the 0 for good is fine. But should I do something like 255 for dead, 128 for warning, leaving me some room to add finer levels of granularity in the future?
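The appeal of the gappy scheme is that severity becomes monotonic in the status value, so "how bad is it overall?" reduces to taking the maximum status across all checks. A minimal sketch of the idea - the constant names and exact values here are hypothetical, not the tool's actual scheme:

```java
// Sketch of a widened status scale with gaps left for finer levels later.
// The values are illustrative assumptions, not the monitoring tool's own.
public class Severity {
    public static final int OK = 0;
    public static final int WARNING = 128;
    public static final int FATAL = 255;

    // With severity monotonic in the status value, the overall state of
    // the system is simply the maximum status over all checks - no
    // decoding games required, and sorting by status orders by urgency.
    public static int worst(int[] statuses) {
        int max = OK;
        for (int s : statuses) {
            if (s > max) {
                max = s;
            }
        }
        return max;
    }
}
```

The same property means a database query like `SELECT MAX(status)` answers the "how bad is it?" question directly.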
Decisions, decisions...
Saturday, November 26, 2005
Another JKstat update
I've updated JKstat to version 0.08.
It's getting better. The accessory widgets have been cleaned up and a couple of new ones added (distribution of packet sizes on bge interfaces, and dma transfer rate on ifb graphics cards). Rates are now accurately computed based on the actual snaptime, rather than approximately based on the intended refresh interval. A couple of internal changes streamline the whole system. And I've fixed it so that actually enumerating the kstats doesn't blindly read all the data, which improves performance.
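The snaptime change amounts to dividing a counter delta by the interval that actually elapsed between the two kernel snapshots, rather than by the nominal refresh interval the GUI asked for. A sketch of the calculation, assuming snaptime is in nanoseconds (as Solaris kstat snaptimes are) - the method names are illustrative, not JKstat's real API:

```java
// Sketch: per-second rate from two readings of a cumulative kstat counter.
// Assumes snaptime is in nanoseconds; names are illustrative only.
public class KstatRate {
    public static double rate(long oldValue, long oldSnaptimeNs,
                              long newValue, long newSnaptimeNs) {
        // Use the actual elapsed time between kernel snapshots, not the
        // intended refresh interval, which the timer only approximates.
        double elapsedSec = (newSnaptimeNs - oldSnaptimeNs) / 1e9;
        return (newValue - oldValue) / elapsedSec;
    }
}
```

If the refresh timer fires late, the nominal interval overstates the rate; dividing by the snaptime delta stays accurate regardless of scheduling jitter.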
With these changes, I'm much happier that it's closing in on its design goals. I was tempted to bump the version up to 0.1, but that would probably be premature based on the number of bugs that I introduced and fixed recently.
My next idea is to build a graphical iostat. Why is this of value? Well, pictures tell you a lot - the eye is very good at interpreting graphical data. You can dynamically hide uninteresting data, or expand areas of interest for a finer view (for example, you could dynamically expand a disk's I/O to show partition data). You can show historical rates, and generally have multiple views of the same data. You can use the gui to show additional context-sensitive data beyond the basic I/O data. And you could, in the future, link to other areas of functionality - such as dtrace to show what was causing all that I/O in the first place.
Thursday, November 24, 2005
Bumps-a-daisy!
Had a bit of a problem yesterday. While driving to work I got bumped hard from behind, in stop-start traffic on the A1(M).
Nothing that serious - nobody was hurt, which is what really matters. The other car was a total wreck, and the rear-end of my Toyota is pretty well squashed. It's not so bad that it can't be repaired, so in a week or so it goes into the body shop and should be all fixed again ready for Christmas.
I have to say, though, that the insurance company aren't exactly covering themselves in glory here. I mean, they presumably deal with this sort of thing on a daily basis, but they do seem to be making heavy weather of it.
Saturday, November 19, 2005
Properly connected
For a long time now, we've had a broadband connection to the internet, but we only had the main home PC hooked up.
No longer! I'm typing this from one of my home Sparc machines running Solaris.
What took me so long I'm not sure, but I finally went and ordered a little cable router (a non-wireless model, which seems to have vanished completely from the shops in favour of wireless models that cost twice as much and that I can't take advantage of). Put in the Setup CD, follow the instructions, and it was working. Connect up my Sun, tell it to use dhcp, and I'm online.
I love it when things just work!
Now to find some really long cables to connect the machines upstairs...
Friday, November 18, 2005
Quest for small server continues...
I'm still working on my quest for a small reliable server.
I was just reading Richard Elling's blog entry on RAS and the X4100/X4200 servers. You should read this - blindingly obvious design features like not putting heat generators like disks in the airflow path for the CPUs. But he also says that most thin servers don't need more than 2 disks. Perhaps this is why I'm having so much trouble finding a server to fit my requirements!
Oh well, having exhausted Sun's catalog, I'm now looking at the likes of the HP DL385 or the Dell PE2850. Both of these are listed in the HCL, which is pretty much essential as I would naturally be running Solaris on the machine.
PostgreSQL, Sun, and Integration
Sun sure are busy with the announcements this week.
With the PostgreSQL announcement (and they don't seem to have mastered the spelling of PostgreSQL), Sun are offering to integrate and support PostgreSQL in Solaris.
Now, this has to be a good thing for both Sun and PostgreSQL. But does it help me?
I'm not really sure that integration does help me. Note that it's the integration - or bundling - that I have a problem with, not Sun supporting it or optimizing it or just supplying it.
Sun already bundle the Apache web server and Tomcat. I spend quite a lot of time setting up web servers, usually using Apache and Tomcat, often with other components (including, as it happens, PostgreSQL on occasions). And I never use the bundled versions that Sun supply. The reason is quite simple - Sun's versions aren't the right versions, aren't set up the way I need, and are installed in the wrong place. It's much easier and safer to install them yourself: you know exactly how they're set up, you know they're going to work exactly the way you want, and you can upgrade to the latest version at any time of your choosing.
Integration really ties you up in knots. Solaris comes with ancient versions of Gnome, and because they're integrated we're stuck with them. Not only that, because it's integral with Solaris 10, we can't apply the same version to our Solaris 9 or 8 machines. Integration locks application update to OS updates, and everybody loses.
What I want - and Sun need - is the ability to choose between sticking with a given version, or going to a new version. This requires that products such as Gnome/JDS (and the same argument applies to anything else, like Mozilla, OpenSSL, Apache, even Java) are unbundled and separated from the core OS. Then, I can select whether I want to stay with Gnome 2.6 (as in the version that comes with JDS on Solaris 10) or have a Gnome 2.12 desktop instead.
Likewise for PostgreSQL, which started this whole blog entry. For different applications, I'm going to have to support different versions - maybe on the same physical machine (using zones, for example). I need the ability to make that choice independent of the underlying OS version, otherwise you end up with an upgrade nightmare.
Thursday, November 17, 2005
Deluged by good stuff
Whole load of interesting stuff coming out of Sun at the moment.
It seems that free stuff isn't just for Fridays anymore. We now get free developer tools. This is something I ought to try. I have to confess to being one of the old-school who can't see what on earth is wrong with emacs, but I'm always keen to try new things, and maybe an IDE might help out.
Then we get snippets about the new Niagara chip. Marketing have clearly got in on this one ("CoolThreads", anyone?), and there does seem to be a certain greenness in the positioning, but this looks like some serious technology.
More free stuff - the Studio 11 compilers. Making these free was inevitable, really - Studio has been free to anyone in the OpenSolaris community for some time now, and I've long felt that the excessive pricing for Studio was crippling its uptake. Good one!
Is that enough? No way! Something really big happened this week - ZFS was unleashed on the world. Sure, it got hyped (overhyped) a year ago when Solaris 10 got announced, but ZFS is the real deal - I've been privileged to have been testing it for almost a year and a half, and it does what it says on the tin. So go check out all the blogs.
Monday, November 14, 2005
Turning Opteron Down
I was putting together a server spec recently. Nothing special, just a reliable box to store 100G or so of data safely and serve it up via the web.
Easy, right?
Well, that's what I thought, and I was wrong.
I have this thing about real servers. They have to have redundant PSUs and redundant, mirrored disk. This means more than 2 internal drives.
(Note that, according to this definition, Sun's SF280R, V210, X2100, E220R, E420R, V480, V490, V20z, and E1280 don't qualify. All are limited to 2 internal drives. They're fine for compute nodes and similar tasks, where the aim is simply to survive long enough to finish the job and decommission the node, but not for real servers. You have to have at least 3 disks to guarantee survival - and reboot - after a disk goes. OK, so you're supposed to add external arrays, but usually you can't do anything like place metadevice databases on the arrays. And also, only having 2 drives makes Live Upgrade harder than it need be. End of first rant.)
OK, so the next thing is that 100G of storage. It doesn't really justify getting an external array - that's fine for a terabyte, but would be a waste in this case. And, unlike something like 10G, you can't just lose it on the boot drives. So 100G is an interesting number.
Grabbing 100G off a SAN doesn't look promising either. Apart from not having one to hand right now, the cost of the HBAs makes a nonsense of it for this amount of data.
So, what else? iSCSI could be interesting, as it saves you the cost of the HBAs. But it's not really mature yet, and I don't happen to have a server handy. (I don't happen to have a convenient NFS server either, which is a shame.)
OK. So the next best thing is to get a box with 4 drives - 2 to house the OS and the application binaries, and a couple extra 146G drives for the data.
So, I start off by thinking - these Sun Opteron boxes look real nice. Particularly the X4100, which can take 4 drives without the DVD. (And you don't need a DVD - it's just something else to waste money and electricity.) However, this won't work. Sun only offer 36G or 73G drives. Not enough! And there isn't a slightly bigger variant that takes more drives. OK, so Sun don't make an Opteron box that will work. Bother.
So, go to Sparc. The V240 works a treat. I like the V240. A couple of boot drives and a couple extra 146G drives and I'm all set. It's interesting that an old Sparc box is better suited than a new Opteron box.
(Not that the V240 is perfect. In the same way that it's a major disappointment that the Opteron boxes don't take 146G drives, it's disappointing that the V240 doesn't support 300G drives. Why don't Sun realize that customers want choice?)
OK, so I'm a Solaris fan, and Solaris x86 runs on a wide range of systems. A quick browse through other manufacturers' websites (and some of them are nowhere near as easy to navigate as they ought to be) shows that this trend of useless system design is fairly widespread. Other manufacturers are more agile at supporting larger drive capacities, but the system designs are similar.
In the end I decided to simply park the problem in a zone on a bigger system. It's a good solution, and was what I wanted to do anyway.
What is intriguing is that Sun used to have ideal systems for this sort of task, and have now scrapped them. The V60x allowed you to have 3 drives, so you could avoid the twin-drive trap. The V65x was a wonderful compact server and let you put 6 drives in. The V250 let you put 8 drives in the chassis, but seemed to get canned pretty quickly. It's not entirely obvious to me that genuine progress is being made.
JKstat updated
After a long hiatus, I've released an updated version of JKstat.
For those who don't know what a kstat is, it's a Solaris kernel statistic. There are a lot of these, and they give you an awful lot of information about what your Solaris system is doing.
JKstat allows you to get at the kstats from a Java application. (Solaris already includes a fabulous perl implementation. One day, I hope JKstat will be as good. It isn't yet.)
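As background: individual kstats are conventionally identified by a module:instance:name triple (for example cpu_stat:0:cpu_stat0), each holding a set of named statistics. This sketch just models that identifier - it is not JKstat's actual class layout:

```java
// Sketch of the conventional kstat identifier: module, instance, name.
// Illustrative only - not JKstat's real API.
public class KstatId {
    public final String module;
    public final int instance;
    public final String name;

    public KstatId(String module, int instance, String name) {
        this.module = module;
        this.instance = instance;
        this.name = name;
    }

    // The colon-separated form used by kstat tools.
    @Override
    public String toString() {
        return module + ":" + instance + ":" + name;
    }
}
```

Grouping identifiers first by module, then instance, then name is what gives the tree structure that a graphical browser can walk.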
The kstats naturally form a tree structure, and I've written a graphical browser that allows you to go through the kstat tree. Like so:
(Oh dear, what has blogger done to my beautiful image? Oh well - click on it and you'll see the real thing!)
There's still a lot to be done. For one thing, I want to actually create a decent API rather than the horrible kludge that I'm using at the moment (it's this, rather than any lack of maturity or functionality, that keeps the version number at a lowly 0.07). And there are a number of existing tools that could be enhanced by a decent graphical user interface (in particular, the ability to dynamically expand or compress certain features - imagine iostat with the ability to zoom in on specific disks and show or hide the partition data on the fly). Looking further ahead, one can imagine integration with dtrace to answer the question "what is causing this activity?".
Enjoy, and if you have any comments (and, in particular, you would like a graphical display of a particular kstat, or can think of novel and useful ways of displaying kstat data) I would love to hear them.
Sunday, November 06, 2005
Everyone an administrator?
James Dickens asks: Why not a server?
And it's an interesting question. Why, in a house with multiple computers, do you not have a dedicated machine somewhere and store all your files on it? It makes a lot more sense than having files spread at random amongst all those machines.
My own solution to this is a portable USB zip drive. I use this to carry stuff about between my machines, and between work and home if needed.
I'm not sure about the suggestion of using a real computer (and an Ultra 2 certainly qualified as a real computer) as the server, though. Yes, I know that Solaris is an absolute doddle to administer (yes, really - once you've got to know it). But it's bad enough that everyone owning a Windows PC has to be a systems administrator, without expanding that even further. Even though it pays my wages, I'm a firm believer that when it comes to systems administration, less is definitely better.
Using a general-purpose computer doesn't necessarily make sense to me. (The one situation where it really comes into its own is if you were to use it as something like a Sun Ray Server.) But generally, some sort of appliance seems to make more sense.
And an appliance running a cut down OpenSolaris with ZFS would be a stunner.
The only downside to a server is that part of the assumption is that it's always on. I'm not sure that we should be encouraging that and the accompanying waste of power when there's so much damage already being done to the environment.
Hosted services - grid, if you like - also have limitations that are painful. They neatly solve the administration, availability, and backup problems, though. The biggest problem I see is that upload speeds on my internet connection are absolutely pathetic. Most internet connectivity is highly asymmetric - fast download, with just enough bandwidth the other way to handle the administrative packets and not a lot more. If we are to see hosted storage really become useful, then upload speeds are going to have to be significantly increased. (And, frankly, network reliability could stand a little improvement.)