I've just released an updated version of KAR, the kstat activity reporter, and a matching JKstat update.
This version carries on with the original aim of simply saving all the kstats and processing them at point of use, rather than trying to predict which output might be useful at the start.
Originally, I was using kstat -p output, but I've had to move on from that. There were a couple of minor issues with the kstat -p output that I ended up having to fix. The major one was that the times from IO kstats were converted from nanoseconds into floating-point seconds. I had a quick look to see if I could supply a modified kstat, but the conversion takes place deep inside perl - it's not just a simple presentation tweak. I could have written special-case parsing code, but it's easier and more reliable to simply generate the correct data in the first place. (Simply printing kstats is pretty trivial to code, so I did.) Having fixed that, I made a couple of other minor changes to make the output more complete (including the kstat type) and smaller and quicker to parse (eliminating all duplicates of the kstat name).
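To illustrate the format being replaced, here's a minimal sketch (my own code, not KAR's actual parser) of reading a kstat -p style line, where each line is "module:instance:name:statistic" followed by whitespace and the value. Keeping timer values as integral nanosecond counts avoids the floating-point seconds problem described above; the class and method names are entirely my own.

```java
import java.util.HashMap;
import java.util.Map;

public class KstatLineParser {

    // Parse one "module:instance:name:statistic\tvalue" line into a map.
    public static Map<String, String> parse(String line) {
        Map<String, String> m = new HashMap<>();
        // Split the key from the value on the first run of whitespace.
        String[] halves = line.split("\\s+", 2);
        // The key has exactly four colon-separated components.
        String[] key = halves[0].split(":", 4);
        m.put("module", key[0]);
        m.put("instance", key[1]);
        m.put("name", key[2]);
        m.put("statistic", key[3]);
        m.put("value", halves[1]);
        return m;
    }

    public static void main(String[] args) {
        Map<String, String> m = parse("sd:0:sd0:rtime\t123456789");
        // The raw value stays an integral nanosecond count, not
        // floating-point seconds.
        System.out.println(m.get("statistic") + " = " + Long.parseLong(m.get("value")));
    }
}
```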
The overhead of running kar is reduced. The storage requirement is halved from the first version, and the cpu time for a data collection is cut from over 0.2s to about 0.01s on my machine. That's all good - you want to minimise the perturbation on the system caused by any monitoring.
I added a little sar output emulator, just to prove that I could, printing out the cpu utilization like the default sar output. The more interesting exercise was generating iostat output, which is what led me to create a custom data collector. Of course, now that I have a complete set of kstats to munge, most of the stat tools in Solaris could be replicated to show what they would have looked like over time (albeit at fairly low time resolution). And, once the CLI tools are exhausted, generation of graphs is next on the list.
Sunday, May 23, 2010
Saturday, May 22, 2010
RHCE
This last week I've been away on a training course, the Red Hat RHCE rapid track course. It's been a long time since I did formal training (anything beyond the odd hour or so, at least), so I wasn't sure how it was going to go.
All in all a success, I think. I certainly learnt a lot, and passed the exam.
As anyone who has done the RHCE exam knows, I can't really talk about any of the details. But I think the following would be helpful to others like myself:
- If you're experienced in unix systems administration, and are used to administering applications, you should cope. Easily.
- With experience on other platforms, the rapid-track course is very useful.
- The RHCE exam covers a lot of application ground. For me, it was really a case of learning how Red Hat has its own tweaks (and how things like the firewall and SELinux interact with applications).
- I found that going away for the course was really helpful. It allowed me to focus on the course without distractions.
Red Hat do a transition course for Solaris administrators. I thought about that, and skipped it. I had no problem adjusting to RHEL, so I'm not sure what value the Solaris admin course would add - I would have thought that if you're a bit unsure, then doing the full RHCE course rather than the Solaris transition course plus the fast-track would be a better bet.
Sunday, May 09, 2010
JKstat 0.37
So JKstat now reaches version 0.37, with the usual spread of minor fixes, enhancements, and new features.
One enhancement is the extension to 64-bit. For JKstat itself, there's no need to run it in 64-bit mode, but I need 64-bit libraries to allow JKstat to run inside a 64-bit JVM. One example of this would be SolView, which would have to run in 64-bit mode to show the sizes of 64-bit processes using JProc.
A new feature is the ability to generate png images directly from the cli, using input such as the kstat -p archives used by kar.

One of the minor fixes this time is to support the recent 1.3 release of JavaFX. It looks like JavaFX isn't binary compatible between releases, so code needs to be rebuilt to match whichever version you're using, which is a shame. Also, this version optimized away some of my method calls and I needed to fool it into not doing so.
Wednesday, May 05, 2010
Solaris Process data from Java
I've got a Java interface, called jproc, to the process data in Solaris, using the procfs filesystem.
In the latest version, apart from starting on a tree-view (currently known to be buggy and woefully incomplete), there are a couple of little technical tricks I had to learn.
The first is that accessing the process data, particularly sizes, of a 64-bit application requires a 64-bit application. That's why tools such as top, ps, and prstat are 64-bit (via isaexec). Now, there's a 64-bit java for Solaris, so I needed to compile my JNI library in 64-bit mode too. Normally you just call
cc -G
but for 64-bit mode it's a little more complicated than that. In particular, just adding
-m64
the way you would expect doesn't work. And it's different on x86 and sparc. So what I've found works is:
amd64:
cc -Kpic -shared -m64
sparc:
cc -xcode=pic13 -shared -m64
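On the Java side, the JVM's data model determines which library can be loaded at all. Here's a hedged sketch (my own code, not JProc's, and the "proc" library name is hypothetical) of checking whether the running JVM is 32-bit or 64-bit before attempting the load:

```java
public class NativeLoader {

    // Returns "64" or "32" depending on the running JVM's data model.
    // sun.arch.data.model is a long-standing Sun/Oracle JVM property.
    public static String dataModel() {
        return System.getProperty("sun.arch.data.model", "unknown");
    }

    public static void main(String[] args) {
        System.out.println("JVM data model: " + dataModel() + "-bit");
        // A 64-bit JVM can only load a 64-bit shared library, so the
        // library path would be chosen based on the data model here.
        // System.loadLibrary("proc"); // hypothetical library name
    }
}
```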
The other thing I wanted to do was to make the display of items in tables a little more readable. JTable just picks up the type of data and defaults to a fairly basic display. My first attempt was to convert my data to pretty strings, and display those, but that had a couple of snags: by default strings get left-justified, which wasn't what I wanted, and sorting broke because it sorted the strings rather than the underlying numerical data.
The answer, of course, is to use a custom TableCellRenderer. This only affects the presentation, so that sorting works correctly against the underlying data. So far all I've done is simply humanize some of the values, but so much more is possible.
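The renderer approach can be sketched like this - a minimal example, not the actual JProc code, with the humanize() formatting entirely my own invention:

```java
import java.util.Locale;
import javax.swing.SwingConstants;
import javax.swing.table.DefaultTableCellRenderer;

// A custom renderer that formats raw Long values for display
// (right-justified, humanized) while the table model keeps the
// numbers themselves, so the row sorter still compares real data.
public class HumanizedSizeRenderer extends DefaultTableCellRenderer {

    public HumanizedSizeRenderer() {
        // Numbers read better right-justified, unlike default Strings.
        setHorizontalAlignment(SwingConstants.RIGHT);
    }

    // Format a byte count as a short human-readable string.
    public static String humanize(long bytes) {
        if (bytes < 1024L) {
            return Long.toString(bytes);
        }
        String[] units = {"K", "M", "G", "T"};
        double v = bytes;
        int i = -1;
        while (v >= 1024.0 && i < units.length - 1) {
            v /= 1024.0;
            i++;
        }
        return String.format(Locale.ROOT, "%.1f%s", v, units[i]);
    }

    @Override
    protected void setValue(Object value) {
        // Only the presentation changes; the model value is untouched.
        if (value instanceof Long) {
            setText(humanize((Long) value));
        } else {
            super.setValue(value);
        }
    }
}
```

It would be installed with table.setDefaultRenderer(Long.class, new HumanizedSizeRenderer()), and the model's getColumnClass() has to report Long for those columns so both the renderer and the sorter see the numeric type.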