Following on from some thoughts on ZFS compression, and nudged by one of the comments, what about ZFS dedup?
There's also a somewhat less opinionated article that you should definitely read.
So, my summary: unlike compression, dedup should be avoided unless you have a specific niche use.
Even for a modest storage system, say something in the 25TB range, you should be aiming for half a terabyte of RAM (or L2ARC). Read the article above. And the point isn't just the cost of an SSD or a memory DIMM, it's the cost of a system that can take enough SSD devices or has enough memory capacity. Then think about a decent-sized storage system that may scale to 10 times that size. Eventually the time may come when that's routine, but my point is that while the typical system you might use today already has CPU power going spare to do compression for you, you're looking at serious engineering to get the capability to do dedup.
We can also see when turning on dedup might make sense. A typical server system may have 48GB of memory, so, scaled from the above, something in the range of 2.5TB of unique data might be a reasonable target. Frankly, that's pretty small, and you need a pretty high dedup ratio to make the savings worthwhile.
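As a rough sanity check on that scaling, here's a back-of-the-envelope sketch. The figures are assumptions rather than measurements: roughly 320 bytes of memory per unique block held in the dedup table, and a fairly small 16K average block size, which is what drives the requirement up so quickly.

# Back-of-the-envelope dedup table (DDT) sizing.
# Assumptions, not measurements: ~320 bytes of memory per unique
# block in the DDT, and a smallish 16K average block size.

BYTES_PER_DDT_ENTRY = 320          # assumed per-entry memory cost
AVG_BLOCK_SIZE = 16 * 1024         # assumed average block size

TB = 1024 ** 4
GB = 1024 ** 3

def ddt_memory(unique_data_bytes):
    """Estimate the memory/L2ARC needed to hold the whole DDT."""
    unique_blocks = unique_data_bytes / AVG_BLOCK_SIZE
    return unique_blocks * BYTES_PER_DDT_ENTRY

def unique_data_for(memory_bytes):
    """Turn it around: how much unique data a given amount of memory covers."""
    return memory_bytes / BYTES_PER_DDT_ENTRY * AVG_BLOCK_SIZE

print(ddt_memory(25 * TB) / GB)        # ~500 GB for 25TB of unique data
print(unique_data_for(48 * GB) / TB)   # ~2.4 TB covered by 48GB of memory

Change the assumed block size or per-entry cost and the numbers move, but the shape of the problem stays the same: the dedup table grows with every unique block you store.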
I've actually tested dedup on some data where I expected to get a reasonable benefit: backup images. The idea here is that you're saving similar data multiple times (either multiple backups of the same host, or backups of like data from lots of different hosts). I got a disappointing saving, of order 7%. Given the amount of memory we would have needed to put into a box to have 100TB of storage, this simply wasn't going to fly. By comparison, I see 25-50% compression on the same data, and you get that essentially for free. And that's part of the argument behind having compression on all the time, and avoiding dedup entirely.
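To put those numbers side by side, a trivial bit of arithmetic on a nominal 100TB of backup data (the percentages are the ones I measured above; the rest is purely illustrative):

# What each saving is worth on a nominal 100TB of backup data.
# Percentages are from the test above; the arithmetic is illustrative.

RAW_DATA_TB = 100

dedup_saving = 0.07                          # ~7% observed with dedup
compress_low, compress_high = 0.25, 0.50     # 25-50% observed with compression

print("dedup reclaims      ", RAW_DATA_TB * dedup_saving, "TB")    # ~7 TB
print("compression reclaims", RAW_DATA_TB * compress_low, "to",
      RAW_DATA_TB * compress_high, "TB")                           # 25-50 TB

# And the 7% from dedup only comes after paying for enough memory to
# hold the DDT, while the compression saving needs nothing but spare CPU.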
I have another opinion here as well, which is that using dedup to identify identical data after the fact is the wrong place to do it, and indicates a failure in data management. If you know you have duplicate data (and you pretty much have to know you've got duplicate data to make the decision to enable dedup in the first place) then you ought to have management in place to avoid creating multiple copies of it: snapshots, clones, single-instance storage, or the like. Not generating duplicate data in the first place is a lot cheaper than creating all the multiple copies and then deduplicating them afterwards.
Don't get me wrong: deduplication has its place. But it's very much a niche feature, and certainly not something you should just enable by default.
Sunday, August 07, 2011
Thoughts on ZFS compression
Apart from the sort of features that I now take for granted in a filesystem (data integrity, easy management, extreme scalability, unlimited snapshots), ZFS also has built-in compression.
I've already noted how this can be used to compress backup catalogs. One important thing here is that it's completely transparent, which isn't true of any scheme that goes around compressing the files themselves.
Recently, I've (finally) started to enable compression more widely, as a matter of course. Certainly on new systems there's no excuse, at the default level of compression at any rate.
There was a caveat there: at the default compression level. The point here being that the default level of compression can get you decent gains and is essentially free: you gain space and reduce I/O for a negligible CPU cost. The more aggressive compression schemes can compress your data more, but having tried them it's clear that there's a significant performance hit: in some cases the machine could freeze completely for a few seconds, which was clearly noticeable to users. Newer, more powerful machines shouldn't have that problem, and there have been improvements in Solaris as well that keep the rest of the system more responsive. I still feel, though, that enabling more aggressive compression than the default is something that should only be done selectively, when you've actually compared the costs and benefits.
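For a feel for the shape of that trade-off, here's a small illustration using Python's zlib at levels 1 and 9 as a stand-in for the default (lzjb) and aggressive (gzip-9) settings. The numbers aren't ZFS measurements, just an analogy for the same kind of cost/benefit comparison.

# Illustration of default-vs-aggressive compression cost, using zlib
# levels 1 and 9 as a stand-in for lzjb vs gzip-9; not a ZFS benchmark.
import random
import time
import zlib

# Moderately compressible sample data with a limited vocabulary.
random.seed(42)
words = [b"backup", b"catalog", b"entry", b"/export/home", b"image", b"ok"]
data = b" ".join(random.choices(words, k=2_000_000))

for level in (1, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print(f"level {level}: {ratio:.1f}x in {elapsed:.2f}s")

# Compare the extra ratio the higher level buys against the extra
# time it takes - that's the comparison worth doing before turning
# up compression on a busy fileserver.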
So, I'm enabling compression on every filesystem containing regular data from now on.
The exception, still, is large image filesystems. Images in TIFF and JPEG format are already compressed so the benefit is pretty negligible. And the old thumpers we still use extensively have relatively little CPU power (both compared to more modern systems, and for the amount of data and I/O these systems do). Compression here is enabled more selectively.
Given the continuing growth in CPU power - even our entry-level systems are 24-way now - I'm expecting it won't be long before we get to the point where enabling more aggressive compression all the time is going to be a no-brainer.