
Re: [Xen-devel] [PATCH] arm: use a per-VCPU stack



On Sat, 2012-02-18 at 13:52 +0000, Tim Deegan wrote:
> At 16:49 +0000 on 15 Feb (1329324592), Ian Campbell wrote:
> > +struct pcpu_info {
> > +    unsigned int processor_id;
> > +    struct vcpu *current_vcpu;
> > +};
> > +
> > +DECLARE_PER_CPU(struct pcpu_info, pcpu_info);
> 
> > +static inline struct pcpu_info *get_pcpu_info(void)
> > +{
> > +    return &this_cpu(pcpu_info);
> > +}
> > +
> 
> I don't think it's worth declaring a struct and accessors for this; we
> should just have current_vcpu as an ordinary per-cpu variable.

That makes sense.
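
Something like this, I suppose (just a sketch; the set_current wrapper
is my assumption about how the context switch path would update it):

    /* An ordinary per-cpu variable instead of a wrapper struct. */
    DECLARE_PER_CPU(struct vcpu *, current_vcpu);

    /* Definition, in exactly one .c file: */
    DEFINE_PER_CPU(struct vcpu *, current_vcpu);

    #define current            (this_cpu(current_vcpu))
    #define set_current(vcpu)  (this_cpu(current_vcpu) = (vcpu))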

> Storing the CPU ID in the per-pcpu area only happens to work because
> per-cpu areas are a noop right now.  I have a patch that re-enables them
> properly but for that we'll need a proper way of getting the CPU id.

I had imagined that we would have per-pCPU page tables, so the current
CPU's per-pcpu area would always be at the same virtual address. If that
is not (going to be) the case then I'll stash it on the VCPU stack
instead.

Thinking about it now, playing tricks with the page tables does make
things awkward on the rare occasions when you want to access another
pCPU's per-cpu area.
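
For the record, roughly what I had in mind (a sketch only;
PERCPU_VIRT_START is a made-up address, not anything in the tree):

    /* Each pCPU's hypervisor page tables map that CPU's own per-cpu
     * data at the same fixed virtual address, so the fast path needs
     * no CPU id lookup at all. */
    #define PERCPU_VIRT_START 0xfee00000UL /* hypothetical */

    static inline struct vcpu *get_current(void)
    {
        /* Assumes current_vcpu is placed first in the per-cpu area. */
        return *(struct vcpu **)PERCPU_VIRT_START;
    }

    /* The awkward case: another pCPU's area isn't mapped here at all,
     * so cross-CPU access needs a second, global mapping (or a
     * temporary one). */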

Speaking of per-cpu areas -- I did notice a strange behaviour while
debugging this. It seemed that a barrier() was not sufficient to stop
the compiler caching the value of "current" in a register (i.e. it
would load it into r6 before the barrier and keep using r6 afterwards).
I figured this was probably an unfortunate side effect of the current
nobbled per-pcpu areas and would be fixed as part of your SMP bringup
stuff.

> We could use the physical CPU ID register; I don't know whether it
> would be faster to stash the ID on the (per-vcpu) stack and update it
> during context switch.

Does the h/w CPU ID correspond to the s/w one in our circumstances?
Might they be very sparse or something inconvenient like that?
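
(On v7 the obvious candidate is MPIDR:

    static inline unsigned long read_mpidr(void)
    {
        unsigned long mpidr;
        /* MPIDR, CP15 c0/c0/5: affinity level 0 in bits [7:0],
         * level 1 in bits [15:8], level 2 in bits [23:16]. */
        asm volatile ( "mrc p15, 0, %0, c0, c0, 5" : "=r" (mpidr) );
        return mpidr;
    }

On a multi-cluster system the affinity fields could easily yield ids
like 0x000, 0x001, 0x100, 0x101, which would need mapping down to
dense logical ids.)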

I'd expect pulling things from registers to be faster in the normal
case, but in this specific scenario I'd imagine the base of the stack
will be pretty cache hot, since it holds all the guest state etc. which
we've probably fairly recently pushed or are about to pop.
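
That is, something along these lines (assuming the stacks are a
power-of-two size and aligned to it; the names and layout are
illustrative, not from the patch):

    #define STACK_SIZE 4096 /* assumed power-of-two size & alignment */

    struct cpu_info {
        unsigned int processor_id;
        /* saved guest state etc. above this */
    };

    static inline struct cpu_info *get_cpu_info(void)
    {
        register unsigned long sp asm ( "sp" );
        /* Mask the stack pointer down to the stack base: cheap, no
         * per-cpu machinery, and the line is likely already hot from
         * saving/restoring guest state nearby. */
        return (struct cpu_info *)(sp & ~(STACK_SIZE - 1));
    }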

[...]

> Aside from that, this patch looks OK to me.

Thanks, I'll repost next week with the fixes you suggest.

Ian.

