Today I was playing with logstash, with the plan to produce a real-time scrolling view of our web traffic.
It's easy enough. Run a logstash shipper on each node, feed everything into redis, get logstash to pull from redis into elasticsearch, then run the logstash front-end and use Kibana to create a dashboard.
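(For reference, the shape of that plumbing is roughly as below - a sketch of the idea rather than our exact config, with placeholder hostnames. Each shipper pushes its events into redis:

output {
  redis {
    host => "redis.example.com"
    data_type => "list"
    key => "logstash"
  }
}

and the central logstash pulls them back out and feeds elasticsearch:

input {
  redis {
    host => "redis.example.com"
    data_type => "list"
    key => "logstash"
  }
}
output {
  elasticsearch {
    host => "es.example.com"
  }
}

Kibana then sits on top of elasticsearch for the dashboards.)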
Then the desire for efficiency strikes. We're running Solaris zones, and there are a lot of them. Each logstash instance takes a fair chunk of memory, so it seems like a waste to run one in each zone.
So what I wanted to do was run a single copy of logstash in the global zone, and get it to read all the zone logs, yet present the data just as though it had been run in the zone.
The first step was to define which logs to read. The file input can take wildcards, leading to a simple pattern:
input {
  file {
    type => "apache"
    path => "/storage/*/opt/proquest/*/apache/logs/access_log"
  }
}
There's a ZFS pool called storage, and each zone has a ZFS file system named after the zone, so the name of the zone is the directory under /storage. That means I can pick out the name of the zone and put it into a field called zonename like so:
grok {
  type => "apache"
  match => ["path","/storage/%{USERNAME:zonename}/%{GREEDYDATA}"]
}
(If it looks odd to use the USERNAME pattern, it's because the naming rules for our zones happen to be the same as for user names, so I use an existing pattern rather than define a new one.)
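(For the curious, the stock grok pattern file defines it as something like

USERNAME [a-zA-Z0-9._-]+

which happens to match our zone naming exactly.)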
I then want the host entry associated with this log to be that of the zone, rather than the default of the global zone. So I mutate the host entry:
mutate {
  type => "apache"
  replace => [ "host","%{zonename}.our.company.name" ]
}
And that's pretty much it. It's very simple, but most of the documentation I could find was out of date, written for older versions of logstash.
There were a couple of extra pieces of information that I then found it useful to add. The simplest was to duplicate the original host entry into a servername, so I can aggregate all the traffic associated with a physical host. The second was to pick out the website name from the zone name (in this case, the zone name is the short name of the website, with a suffix appended to distinguish the individual zones).
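For the first of these, a minimal sketch would be another mutate (the add_field usage below is generic rather than lifted from my actual config, and it needs to run before the host replacement above, while the original host value is still present):

mutate {
  type => "apache"
  add_field => [ "servername", "%{host}" ]
}

For the second: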
grok {
  type => "apache"
  match => ["zonename","%{WORD:sitename}-%{GREEDYDATA}"]
}
Then sitename contains the short name of the site, again allowing me to aggregate the statistics from all the zones that serve that site.
3 comments:
Hi Peter,
"Help me, Obi Wan Kenobi, you're my only hope."
We have a similar configuration, but are facing the following problem:
When logstash in the global zone opens a logfile in a local zone, the zone has trouble halting or rebooting, because it can't unmount the filesystem that contains the logfile.
Like you, I want to avoid having a logstash client in each zone.
Thanks in advance,
Matthias
We didn't see that. The file is accessed by its path in the global zone, not by its path in the zone, so it doesn't block the unmount.
(To be clear - we didn't delegate filesystems to zones; we always managed them from the global zone and just used lofs to make them appear at the right place in the zone. Now, the root dataset does get delegated on ZFS, but that should be disposable, with the data stored somewhere separate.)
What we did see is that we couldn't destroy the filesystem, although the simplest way to avoid that was to simply delete the relevant files and wait a few seconds for logstash to forget about them.
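To illustrate the path question (the zone name and zoneroot location here are made up for the example), the distinction is between pointing the file input at

path => "/storage/webzone1/opt/proquest/site1/apache/logs/access_log"

and reaching the same file through the zone's root, something like

path => "/zones/webzone1/root/opt/proquest/site1/apache/logs/access_log"

It's the latter that gets in the way of the zone halting cleanly.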
Now it's clear!
Unlike me, you were using the correct path to your logfiles from the global zone's point of view. I was using the path of the lofs mount of the zones (under the zoneroot path). Obviously the zone wants to unmount this path, and it would also disappear from the global zone's view.
Now I'm using the global zone's local path to my logs and it works.
Thanks a lot for the hint!
Cheers,
Matthias