does it gracefully handle the situation where you have thousands of zfs filesystems?
And I don't actually know - because I haven't actually tried it.
The original code got the list of zfs filesystems by calling
zfs list
(which is now all it does) and then retrieved all the properties for each one - whether you viewed them or not. I soon scrapped that loop, as it was obvious that it didn't scale. So I think my code is about as efficient as it can be - it's going to scale as well as the underlying tools do.

However, one of the things I've given some thought to - and one of the reasons for writing SolView in the first place - is how to get a handle on systems as they scale up. I'm not talking about managing large numbers of systems (that's an entirely separate problem); I'm talking about looking at a single system where the number of instances of an object may be measured in the tens, hundreds, or thousands.
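The single-invocation approach mentioned above - run zfs list once and parse its output, rather than fetching every property of every filesystem up front - might look something like this sketch. The tab-separated format is what zfs list -H produces for scripting; the sample data and the parse_zfs_list helper are invented for illustration.

```python
def parse_zfs_list(output):
    """Parse 'zfs list -H' output into a list of dataset dicts."""
    fields = ("name", "used", "avail", "refer", "mountpoint")
    datasets = []
    for line in output.splitlines():
        if not line.strip():
            continue
        datasets.append(dict(zip(fields, line.split("\t"))))
    return datasets

# In real use this string would come from a single subprocess call,
# e.g. subprocess.run(["zfs", "list", "-H"], ...); the datasets here
# are made up.
sample = (
    "rpool\t10G\t50G\t32K\t/rpool\n"
    "rpool/home\t8G\t50G\t8G\t/export/home\n"
)
for ds in parse_zfs_list(sample):
    print(ds["name"], ds["mountpoint"])
```

One process and one pass over the output, however many filesystems there are - any per-dataset work happens only when the user actually looks at that dataset.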
For example, my T5140s have 128 processor threads. I have systems with 100 virtual network interfaces. Many people have systems with thousands of zfs filesystems. Zones encourage consolidation of multiple applications onto a single system (so do other virtualization technologies, but in those other cases you tend to manage the instances independently), so you may be looking at a system with dozens of zones and thousands of processes running. A thumper has 48 disks, and that's small. Using SMF, a machine typically has a couple of hundred services.
The common thread here is that the number of objects under consideration is larger than you can fit on screen (or in a terminal window, at any rate) in one go - and thus larger than you can actually take in at once. How does your brain cope with reading the output from running
df
on 10,000 filesystems?

As we move into this brave new world, we're going to need better tools in the areas of sorting, aggregation, and filtering.
A couple of examples from SolView and (originally) JKstat:
I wrote a lookalike of xcpustate for JKstat. That works great on my desktop. But my desktop isn't big enough to show a copy of it running on a T5140, so I wrote an enhanced version (now shipping with SolView) that shows the aggregate statistics for cores and chips, and allows you to hide the threads or cores, which makes the amount of information thrown at your eyeballs at any given time rather more manageable.
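The aggregation idea can be sketched roughly like this: roll per-thread CPU utilisation up into per-core and per-chip figures, so a 128-thread machine shows a handful of rows instead of 128. The thread-to-(chip, core) mapping and the utilisation numbers below are hypothetical - on a real system they would come from the kstats that JKstat reads.

```python
from collections import defaultdict

def aggregate(per_thread, topology, level):
    """Average per-thread utilisation over 'core' or 'chip' groups.

    per_thread: {thread_id: utilisation}
    topology:   {thread_id: (chip_id, core_id)}
    level:      'core' or 'chip'
    """
    groups = defaultdict(list)
    for tid, util in per_thread.items():
        chip, core = topology[tid]
        key = (chip, core) if level == "core" else chip
        groups[key].append(util)
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Four threads, two per core, both cores on one chip (made-up figures).
topology = {0: (0, 0), 1: (0, 0), 2: (0, 1), 3: (0, 1)}
util = {0: 0.25, 1: 0.75, 2: 0.5, 3: 0.5}
print(aggregate(util, topology, "core"))   # one row per core
print(aggregate(util, topology, "chip"))   # one row per chip
```

The same display code can then be pointed at whichever level of the hierarchy the user has chosen to look at, with the lower levels hidden until asked for.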
Another example is that the original view of SMF services in SolView was just a linear list. I then wrote a tree view, based on the (apparently) hierarchical names of the services. I found that the imposition of structure - even a structure that's mostly artificial - helps the brain focus on the information rather than be overwhelmed by a flat unstructured list. And that structure breaks the services down into chunks that are small enough for the user to handle easily.
So back to the example of huge numbers of ZFS filesystems. The plan is to show them in the display grouped in the same hierarchy as the filesystems themselves, rather than as a plain list, and to show snapshots as children of their parent filesystem - doing everything possible to break things down into more manageable chunks.
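The grouping described above can be sketched as follows: the dataset names themselves encode the hierarchy ('/' separates child filesystems, '@' marks a snapshot), so a flat list of names is enough to rebuild the tree. The sample names are invented; in practice the list would come from something like zfs list.

```python
def build_tree(names):
    """Nest a flat list of ZFS dataset names into {name: {child: {...}}}.

    Snapshots ('pool/fs@snap') are attached as '@snap' children of
    their parent filesystem.
    """
    root = {}
    for name in sorted(names):
        if "@" in name:
            fs, snap = name.split("@", 1)
            parts = fs.split("/") + ["@" + snap]
        else:
            parts = name.split("/")
        node = root
        for part in parts:
            node = node.setdefault(part, {})
    return root

def show(tree, depth=0):
    """Print the tree with indentation, one dataset per line."""
    for name, children in tree.items():
        print("  " * depth + name)
        show(children, depth + 1)

# Hypothetical dataset names, as from 'zfs list' plus its snapshots.
names = ["tank", "tank/home", "tank/home/alice",
         "tank/home/alice@monday", "tank/home/bob"]
show(build_tree(names))
```

Even with thousands of filesystems, the user only ever sees one level of the tree expanded at a time, which is exactly the chunking the SMF tree view provides for services.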
This relies on the underlying data being structured. I'm assuming that when someone has 100,000 filesystems that they are structured somehow - whether by department or hashed by name or whatever - rather than being a great unstructured mess. I can't create order out of chaos, but the tools we use should do everything they can to use what order they can find to create a structure that's easy to comprehend.