One of the headline features in Solaris 10 is DTrace, allowing you to probe the inner workings of a system in more detail than ever before.
I'm no expert, but I like some of Brendan Gregg's DTrace Tools.
In fact, my favourite so far is execsnoop.
(I think this says rather more about the sort of activity on my systems than anything else. We don't run significant databases or servers; many systems run random junk. Desktop use; development use; loads of badly written shell scripts. And I don't need DTrace to tell me that most of the compute applications are awful.)
So execsnoop tells me how badly written some of this scripting is.
The worst I've found so far is mozilla. It isn't launched as a single binary - almost 60 shell commands run before the mozilla binary itself is reached. And essentially all of this scripting is completely pointless - the parameters being set are fixed and don't need to be worked out afresh each time you launch it.
Another interesting thing I spotted was uname being run when I logged in. This turned out to be my tcsh startup working out what sort of machine I was using. But tcsh already knows exactly what sort of system it's running on: the OSTYPE and MACHTYPE environment variables tell you all you need to know. I knew this already, of course - but DTrace revealed one place I had missed. (And also - in tcsh you don't need to exec any commands to set a dynamic prompt: tcsh has builtin variables you can use.)
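As a minimal sketch of that idea, here's the sort of .tcshrc fragment I mean (the PATH tweak is just a made-up example of something you might branch on; the variables and prompt escapes themselves are standard tcsh):

```shell
# Hypothetical ~/.tcshrc fragment.
# tcsh sets OSTYPE and MACHTYPE itself at startup (e.g. "solaris",
# "sparc"), so there's no need to fork uname to find out.
if ($OSTYPE == solaris) then
    setenv PATH /usr/xpg4/bin:${PATH}
endif

# A dynamic prompt without exec'ing any commands: %m is the hostname,
# %~ the current directory, %# the prompt character - all tcsh builtins.
set prompt = '%m:%~%# '
```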
I've also found unnecessary duplication of work in various system monitoring shell scripts, and lots of simple cases of inefficient coding. The most common things I see are excessive calls to uname (often generic scripts discovering that they're running on Solaris, which they ought to have known already) and excessive use of expr (either learn to iterate over the positional parameters correctly, or rewrite in a more capable shell like ksh that can do arithmetic natively).
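To illustrate the expr point with a small sketch of my own (not taken from any of the scripts in question): every call to expr forks a new process, while ksh and POSIX-style shells can do the same arithmetic - and walk the argument list - without forking at all.

```shell
#!/bin/sh
# The slow pattern, one fork per iteration:
#   i=`expr $i - 1`
# Shell arithmetic does the same work in-process:
i=5
total=0
while [ "$i" -gt 0 ]; do
    total=$((total + i))
    i=$((i - 1))
done
echo "$total"    # prints 15

# And iterating over the arguments needs no expr-driven counter at all:
set -- one two three
count=0
for arg in "$@"; do
    count=$((count + 1))
done
echo "$count"    # prints 3
```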
In short: try leaving execsnoop running for a while and see what stupidities show up!