Re: [Xen-devel] Re: [Crash-utility] xencrash fixes for xen-3.3.0
On 7/10/08 15:35, "Dave Anderson" <anderson@xxxxxxxxxx> wrote:
>> PERCPU_SHIFT has only ever been 12 or 13 so far, and it's unlikely to ever
>> get smaller. Ongoing, we could help you out by defining some useful label in
>> our linker script. For example, __per_cpu_shift = PERCPU_SHIFT (or
>> '__per_cpu_start + PERCPU_SHIFT', as I'm not sure about creating labels
>> outside the virtual address ranges defined by the object file).
>>
>> -- Keir
>
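For reference, the label Keir proposes above might look roughly like this in the hypervisor's linker script (an illustrative sketch only; the file name, surrounding sections, and exact layout are assumptions, not the actual xen-3.3.0 script):

```
/* xen/arch/x86/xen.lds.S -- illustrative excerpt, section layout abridged */
. = ALIGN(PAGE_SIZE);
__per_cpu_start = .;
.data.percpu : { *(.data.percpu) }
__per_cpu_data_end = .;

/* Proposed label: export the shift as a symbol so tools such as crash
 * can read it from the symbol table instead of hard-coding 12 or 13. */
__per_cpu_shift = PERCPU_SHIFT;
```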
> Yep, that's fine too, but for now Oda-san's patch will suffice, as
> long as the smallest possible per-cpu data section on the x86 arch with
> a PERCPU_SHIFT of 13 always overflows into a space greater than 4k.
> So I'm still curious, because I note that on a RHEL5 x86_64 hypervisor
> the per-cpu data space is 1576 bytes, and presumably smaller on an x86.
> Was there a new data structure that forced the issue? And does it force
> the issue on both arches?
PERCPU_SHIFT has to be big enough that the per-cpu data area is smaller
than 1<<PERCPU_SHIFT bytes. This relationship is not enforced at build
time, but we BUG_ON() it early during boot. Indeed, at some point during
3.3 development some big structs got dumped into the per-cpu area and
pushed its size beyond 2^12 bytes. Hence we BUG()ed, and hence we bumped
the shift to 13.
What this does mean is that, on some builds, we might actually have a data
area < 4kB even while PERCPU_SHIFT==13. I think that's unlikely in
practice, though, since I believe we're now well over the 4kB boundary.
I don't think Xen/ia64 uses this same implementation technique. It's
probably more like Linux's percpu data area implementation.
-- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel