I had to spend a little time recently fixing a crash in OpenJDK on Solaris SPARC.
What we're seeing, from the hs_err file, is:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0xffffffff57c745a8, pid=18442, tid=37
...
# Problematic frame:
# V [libjvm.so+0x7745a8] G1CollectedHeap::allocate_new_tlab(unsigned long, unsigned long, unsigned long*)+0xb8
Well, that's odd. I only see this on SPARC, and I'd seen it sporadically on Tribblix while continually building OpenJDK on SPARC, but hadn't seen it on Solaris - until a customer hit it in production, which is rather a painful place to find a reproducer.
In terms of source, this is located in the file src/hotspot/share/gc/g1/g1CollectedHeap.cpp (all future source references will be relative to that directory), and looks like:
HeapWord* G1CollectedHeap::allocate_new_tlab(size_t min_size,
                                             size_t requested_size,
                                             size_t* actual_size) {
  assert_heap_not_locked_and_not_at_safepoint();
  assert(!is_humongous(requested_size), "we do not allow humongous TLABs");
  return attempt_allocation(min_size, requested_size, actual_size);
}
That's incredibly simple. There's not much that can go wrong there, is there?
The complexity here is that a whole load of functions get inlined. So what does it call? You find yourself in a twisty maze of passages, all alike. But anyway, the next one down is
inline HeapWord* G1CollectedHeap::attempt_allocation(size_t min_word_size,
                                                     size_t desired_word_size,
                                                     size_t* actual_word_size) {
  assert_heap_not_locked_and_not_at_safepoint();
  assert(!is_humongous(desired_word_size), "attempt_allocation() should not "
         "be called for humongous allocation requests");
  HeapWord* result = _allocator->attempt_allocation(min_word_size, desired_word_size, actual_word_size);
  if (result == NULL) {
    *actual_word_size = desired_word_size;
    result = attempt_allocation_slow(desired_word_size);
  }
  assert_heap_not_locked();
  if (result != NULL) {
    assert(*actual_word_size != 0, "Actual size must have been set here");
    dirty_young_block(result, *actual_word_size);
  } else {
    *actual_word_size = 0;
  }
  return result;
}
That then calls an inlined G1Allocator::attempt_allocation() in g1Allocator.hpp. That calls current_node_index(), which looks safe, and then there are a couple of calls to mutator_alloc_region()->attempt_retained_allocation() and mutator_alloc_region()->attempt_allocation(), which come from g1AllocRegion.inline.hpp. Both ultimately call a local par_allocate(), which in turn calls par_allocate_impl() or par_allocate() in heapRegion.inline.hpp.
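To give a flavour of that first hop, here's roughly what the inlined G1Allocator::attempt_allocation() looks like (paraphrased from memory of the source at this vintage, so treat it as a sketch rather than the exact code):

inline HeapWord* G1Allocator::attempt_allocation(size_t min_word_size,
                                                 size_t desired_word_size,
                                                 size_t* actual_word_size) {
  uint node_index = current_node_index();
  // First try to reuse a retained region, then the current mutator alloc region.
  HeapWord* result = mutator_alloc_region(node_index)->attempt_retained_allocation(
      min_word_size, desired_word_size, actual_word_size);
  if (result != NULL) {
    return result;
  }
  return mutator_alloc_region(node_index)->attempt_allocation(
      min_word_size, desired_word_size, actual_word_size);
}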
Now, mostly what all of these do is call something else. The one really complex piece of code is in par_allocate_impl(), which contains
...
  do {
    HeapWord* obj = top();
    size_t available = pointer_delta(end(), obj);
    size_t want_to_allocate = MIN2(available, desired_word_size);
    if (want_to_allocate >= min_word_size) {
      HeapWord* new_top = obj + want_to_allocate;
      HeapWord* result = Atomic::cmpxchg(&_top, obj, new_top);
      // result can be one of two:
      //  the old top value: the exchange succeeded
      //  otherwise: the new value of the top is returned.
      if (result == obj) {
        assert(is_object_aligned(obj) && is_object_aligned(new_top), "checking alignment");
        *actual_size = want_to_allocate;
        return obj;
      }
    } else {
      return NULL;
    }
  } while (true);
}
Right, let's go back to the crash. We can open up the core file in mdb and look at the stack with $C:
ffffffff7f39d751 libjvm.so`_ZN7VMError14report_and_dieEP6ThreadjPhPvS3_+0x3c(
101cbb1d0?, b?, fffffffcb45dea7c?, ffffffff7f39ecb0?, ffffffff7f39e9a0?, 0?)
ffffffff7f39d811 libjvm.so`JVM_handle_solaris_signal+0x1d4(b?,
ffffffff7f39ecb0?, ffffffff7f39e9a0?, 0?, ffffffff7f39e178?, 101cbb1d0?)
ffffffff7f39dde1 libjvm.so`_ZL17javaSignalHandleriP7siginfoPv+0x20(b?,
ffffffff7f39ecb0?, ffffffff7f39e9a0?, 0?, 0?, ffffffff7e7dd370?)
ffffffff7f39de91 libc.so.1`__sighndlr+0xc(b?, ffffffff7f39ecb0?,
ffffffff7f39e9a0?, fffffffcb4b38afc?, 0?, ffffffff7f20c7e8?)
ffffffff7f39df41 libc.so.1`call_user_handler+0x400((int) -1?,
(siginfo_t *) 0xffffffff7f39ecb0?, (ucontext_t *) 0xc?)
ffffffff7f39e031 libc.so.1`sigacthandler+0xa0((int) 11?,
(siginfo_t *) 0xffffffff7f39ecb0?, (void *) 0xffffffff7f39e9a0?)
ffffffff7f39e5b1 libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0xb8(
10013d030?, 100?, 520?, ffffffff7f39f000?, 0?, 0?)
What you see here is allocate_new_tlab() at the bottom; it raises a signal, the signal handler catches it and ultimately passes it to JVM_handle_solaris_signal(), which bails, and the JVM exits.
We can look at the signal. It's at address 0xffffffff7f39ecb0 and is of type siginfo_t, so we can just print it
java:core> ffffffff7f39ecb0::print -t siginfo_t
and we first see
siginfo_t {
int si_signo = 0t11 (0xb)
int si_code = 1
int si_errno = 0
...
OK, the signal was indeed 11 = SIGSEGV. The interesting thing is the si_code of 1, which is defined as
#define SEGV_MAPERR 1 /* address not mapped to object */
Ah. Now, in the JVM you actually see a lot of SIGSEGVs, but many of them are handled by that mysterious JVM_handle_solaris_signal(). In particular, it'll handle anything with SEGV_ACCERR, which is basically something running off the end of an array.
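The distinction it's drawing is just the si_code field in the siginfo it gets handed. A minimal illustration of that check (my own sketch, not the actual HotSpot handler) might look like:

#include <signal.h>

// Illustrative only: tell "address not mapped" (SEGV_MAPERR) apart from
// "mapped but wrong permissions" (SEGV_ACCERR), which is the case the JVM's
// handler is prepared to recover from.
static bool is_potentially_recoverable_segv(const siginfo_t* info) {
  return info->si_signo == SIGSEGV && info->si_code == SEGV_ACCERR;
}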
Further down, you can see the fault address
struct __fault = {
void *__addr = 0x10
int __trapno = 0
caddr_t __pc = 0
int __adivers = 0
}
So, we're faulting on address 0x10. Yes, you try messing around down there and you will fault.
That confirms the crash is a SEGV. What are we actually trying to do? We can disassemble the allocate_new_tlab() function and see what's happening - remember the crash was at offset 0xb8
java:core> libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm::dis
...
libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0xb8: ldx [%i4 + 0x10], %i5
That's interesting: 0x10 was the fault address. What's %i4 then?
java:core> ::regs
%i4 = 0x0000000000000000
Yep. Given that, the ldx tries to read address 0x10, giving the SEGV we see.
There's a little more context around that call site. A slightly
expanded view is
libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0xa0:    nop
libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0xa4:    add %i5, %g1, %g1
libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0xa8:    casx [%g3], %i5, %g1
libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0xac:    cmp %i5, %g1
libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0xb0:    be,pn %xcc, +0x160 <libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0x210>
libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0xb4:    nop
libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0xb8:    ldx [%i4 + 0x10], %i5
Now, the interesting thing here is the casx (compare and swap) instruction. That lines up with the Atomic::cmpxchg() in par_allocate_impl() that we were suspecting above. So the crash is somewhere around there.
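If the compare-and-swap idiom isn't familiar, here's a minimal standalone illustration of the pattern (plain C++ with std::atomic, nothing to do with the JVM's own Atomic class):

#include <atomic>

// Atomically set *addr to new_val only if it still holds expected, and return
// whatever value was actually observed. If the return value equals expected we
// won the race; otherwise another thread got there first and we retry, which
// is exactly the loop in par_allocate_impl().
template <typename T>
T cmpxchg_sketch(std::atomic<T>* addr, T expected, T new_val) {
  T observed = expected;
  addr->compare_exchange_strong(observed, new_val);
  return observed;
}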
It turns out there's another way to approach this. If we compile without optimization, we effectively turn off the inlining. The way to do this is to add an entry to the JVM makefiles via make/hotspot/lib/JvmOverrideFiles.gmk:
...
else ifeq ($(call isTargetOs, solaris), true)
  ifeq ($(call isTargetCpuArch, sparc), true)
    # ptribble port tweaks
    BUILD_LIBJVM_g1CollectedHeap.cpp_CXXFLAGS += -O0
  endif
endif
If we rebuild (having touched all the files in the directory to force make to rebuild everything correctly) and run again, we get the full call stack. Now the crash is
# V [libjvm.so+0x80cc48] HeapRegion::top() const+0xc
which we can expand to the following stack, leading up to where it goes into the signal handler:
ffffffff7f39dff1 libjvm.so`_ZNK10HeapRegion3topEv+0xc(0?, ffffffff7f39ef40?,
101583e38?, ffffffff7f39f020?, fffffffa46de8038?, 10000?)
ffffffff7f39e0a1 libjvm.so`_ZN10HeapRegion17par_allocate_implEmmPm+0x18(0?,
100?, 10000?, ffffffff7f39ef60?, ffffffff7f39ef40?, 8f00?)
ffffffff7f39e181
libjvm.so`_ZN10HeapRegion27par_allocate_no_bot_updatesEmmPm+0x24(0?, 100?,
10000?, ffffffff7f39ef60?, 566c?, 200031?)
ffffffff7f39e231
libjvm.so`_ZN13G1AllocRegion12par_allocateEP10HeapRegionmmPm+0x44(100145440?,
0?, 100?, 10000?, ffffffff7f39ef60?, 0?)
ffffffff7f39e2e1 libjvm.so`_ZN13G1AllocRegion18attempt_allocationEmmPm+0x48(
100145440?, 100?, 10000?, ffffffff7f39ef60?, 3?, fffffffa46ceff48?)
ffffffff7f39e3a1 libjvm.so`_ZN11G1Allocator18attempt_allocationEmmPm+0xa4(
1001453b0?, 100?, 10000?, ffffffff7f39ef60?, 7c0007410?, ffffffff7f39ea41?)
ffffffff7f39e461 libjvm.so`_ZN15G1CollectedHeap18attempt_allocationEmmPm+0x2c(
10013d030?, 100?, 10000?, ffffffff7f39ef60?, 7c01b15e8?, 0?)
ffffffff7f39e521 libjvm.so`_ZN15G1CollectedHeap17allocate_new_tlabEmmPm+0x24(
10013d030?, 100?, 10000?, ffffffff7f39ef60?, 0?, 0?)
So yes, this confirms that we are indeed in par_allocate_impl() and
it's crashing on the very first line of the code segment I showed
above, where it calls top(). All top() does is return the _top member
of a HeapRegion.
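As far as I can recall, top() is nothing more than this (paraphrasing heapRegion.hpp, so treat it as a sketch):

// Effectively a single load of the _top field from the HeapRegion object.
HeapWord* top() const { return _top; }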
So the only thing that can happen here is that the HeapRegion pointer itself is NULL. The _top member is presumably at offset 0x10 within HeapRegion, and trying to read it through that NULL pointer gives exactly the SIGSEGV at address 0x10 that we saw.
Now, in G1AllocRegion::attempt_allocation() there's an assert:
HeapRegion* alloc_region = _alloc_region;
assert_alloc_region(alloc_region != NULL, "not initialized properly");
However, asserts aren't compiled into production builds.
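For anyone who hasn't poked at HotSpot builds, the effect is roughly this (an illustrative macro of my own, not HotSpot's actual assert machinery):

#include <cstdio>
#include <cstdlib>

// In a debug (ASSERT) build the check fires and aborts with a message; in a
// product build it compiles away to nothing, so a NULL region sails straight
// through to the dereference.
#ifdef ASSERT
#define vm_assert(cond, msg) \
  do { if (!(cond)) { fprintf(stderr, "assert failed: %s\n", (msg)); abort(); } } while (0)
#else
#define vm_assert(cond, msg) ((void)0)
#endif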
But the fix here is to fail if we've got NULL and let the caller retry. There are a lot of calls in this chain, and the general approach throughout is to return NULL if anything goes wrong, so I do the same for this extra failure case, adding the following:
if (alloc_region == NULL) {
  return NULL;
}
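For context, this is roughly where that check ends up in G1AllocRegion::attempt_allocation() in g1AllocRegion.inline.hpp (paraphrased from memory and with the tracing calls omitted, so a sketch rather than the literal patch):

inline HeapWord* G1AllocRegion::attempt_allocation(size_t min_word_size,
                                                   size_t desired_word_size,
                                                   size_t* actual_word_size) {
  HeapRegion* alloc_region = _alloc_region;
  assert_alloc_region(alloc_region != NULL, "not initialized properly");

  // Defensive check for product builds: if the region hasn't been set up,
  // bail out and let the caller fall back to the slow path.
  if (alloc_region == NULL) {
    return NULL;
  }

  return par_allocate(alloc_region, min_word_size, desired_word_size, actual_word_size);
}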
With that, no more of those pesky crashes. (There might be others
lurking elsewhere, of course.)
Of course, what this doesn't explain is why the HeapRegion wasn't
correctly initialized in the first place. But that's another problem
entirely.