
Re: [Xen-devel] [PATCH RFC 4/4] xen: use per-vcpu TSS and stacks for pv domains

On 09/01/18 20:13, Andrew Cooper wrote:
> (sorry for the top-post. I'm on my phone) 
> I can see you are using ltr, but I don't see anywhere where you are 
> changing the content of the TSS, or the top-of-stack content.

The per-vcpu TSS is already initialized with the correct stack
addresses, so it doesn't have to be modified later.

> It is very complicated to safely switch IST stacks when you might be taking 
> interrupts.

Using LTR with a new TSS with both stack areas mapped (old and new)
should work, right?


> ~Andrew 
> ________________________________________
> From: Juergen Gross [jgross@xxxxxxxx]
> Sent: 09 January 2018 17:40
> To: Andrew Cooper; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Ian Jackson; konrad.wilk@xxxxxxxxxx; jbeulich@xxxxxxxx
> Subject: Re: [PATCH RFC 4/4] xen: use per-vcpu TSS and stacks for pv domains
> On 09/01/18 18:01, Andrew Cooper wrote:
>> On 09/01/18 14:27, Juergen Gross wrote:
>>> Instead of using the TSS and stacks of the physical processor allocate
>>> them per vcpu, map them in the per domain area, and use those.
>>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>> I don't see anything here which updates the fields in the TSS across
>> context switch.  Without it, you'll be taking NMIs/MCEs/DF's on the
>> wrong stack.
> No, I'm doing ltr() with a TSS referencing the per-vcpu stacks. TSS is
> per vcpu, too.
>> I still don't see how your plan is viable in the first place, and is
>> adding substantially more complexity to an answer which doesn't need it.
>> I'm afraid I'm on the verge of a nack unless you can explain how it is
>> intended to be safe, and better than what we currently have.
> It is laying the groundwork for a KAISER solution needing no mapping of
> per physical cpu areas in the user guest tables, so isolating the guests
> from each other.
> Juergen

Xen-devel mailing list
