
Re: [Xen-devel] [PATCH v3 07/32] xen/x86: fix arch_set_info_guest for HVM guests



On 05/08/15 10:53, Roger Pau Monné wrote:
> On 04/08/15 at 20.08, Andrew Cooper wrote:
>> On 03/08/15 18:31, Roger Pau Monné wrote:
>>>> Therefore, I am leaning slightly towards the "specify the internals" side
>>>> of things, which removes some complexity from the Xen side of the
>>>> hypercall.
>>> I've updated the proposed structure, so now it looks like:
>>>
>>> (I still need to figure out how to order the bits in the access rights 
>>> (_ar) fields.)
>> All structs need a flags register.
> I was unsure about adding the {e/r}flags register; I will add it in new
> versions.

For first-gen VT-x, CR0.PE cannot be cleared. HVMLoader has to do a
dance with vm86 mode, which is where HVM_PARAM_IDENT_PT gets used.

I don't realistically expect this to be a problem for DMlite guests.
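
For context, a rough sketch of how a toolstack points Xen at that
identity page table via libxc (assuming the xc_hvm_param_set interface;
the helper and its caller are illustrative, and the page table itself
must already have been written into guest memory):

    #include <xenctrl.h>
    #include <xen/hvm/params.h>

    /* Tell Xen the guest-physical address of the identity-mapped page
     * table, so vm86-mode execution works on hardware without
     * unrestricted-guest support.
     */
    static int set_ident_pt(xc_interface *xch, uint32_t domid,
                            uint64_t ident_pt_gaddr)
    {
        return xc_hvm_param_set(xch, domid, HVM_PARAM_IDENT_PT,
                                ident_pt_gaddr);
    }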

>
>>> struct vcpu_hvm_x86_16 {
>>>     uint16_t ax;
>>>     uint16_t cx;
>>>     uint16_t dx;
>>>     uint16_t bx;
>>>     uint16_t sp;
>>>     uint16_t bp;
>>>     uint16_t si;
>>>     uint16_t di;
>>>     uint16_t ip;
>>>
>>>     uint32_t cr[8];
>> Definitely no need for anything other than cr0 and cr4 in 16-bit mode.
> Yes. Would you prefer to spell out the exact control registers that are
> going to be used, instead of using an array?

I would suggest so.

Attempting to use CRs 1 and 5-7 unconditionally raises #UD. CR2 is
useless until you enter the #PF handler. CR8 may (hardware dependent)
hold the TPR, which is useless until the IDT is set up and interrupts
have been enabled.

>
> For 16bit mode this is going to be CR0/CR4, for 32bit or long mode
> CR0/CR3/CR4.

Agreed.
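
To illustrate (a sketch only, not a proposed final layout; field names
are mine), the cr[8] arrays could become explicit fields along these
lines:

    /* 16-bit variant: only CR0/CR4 are meaningful at start of day. */
    struct vcpu_hvm_x86_16 {
        /* ... GPRs and ip as above ... */
        uint32_t cr0;
        uint32_t cr4;
        /* ... segment bases/limits/attributes as above ... */
    };

    /* 32-bit and long-mode variants additionally need CR3 for paging. */
    struct vcpu_hvm_x86_32 {
        /* ... GPRs and eip as above ... */
        uint32_t cr0;
        uint32_t cr3;
        uint32_t cr4;
        uint64_t efer;
        /* ... segment bases/limits/attributes as above ... */
    };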

>
>>>     uint32_t cs_base;
>>>     uint32_t ds_base;
>>>     uint32_t ss_base;
>>>     uint32_t cs_limit;
>>>     uint32_t ds_limit;
>>>     uint32_t ss_limit;
>>>     uint16_t cs_ar;
>>>     uint16_t ds_ar;
>>>     uint16_t ss_ar;
>>> };
>>>
>>> struct vcpu_hvm_x86_32 {
>>>     uint32_t eax;
>>>     uint32_t ecx;
>>>     uint32_t edx;
>>>     uint32_t ebx;
>>>     uint32_t esp;
>>>     uint32_t ebp;
>>>     uint32_t esi;
>>>     uint32_t edi;
>>>     uint32_t eip;
>>>
>>>     uint32_t cr[8];
>> Don't need CRs 5-8.
>>
>>>     uint64_t efer;
>>>
>>>     uint32_t cs_base;
>>>     uint32_t ds_base;
>>>     uint32_t ss_base;
>>>     uint32_t cs_limit;
>>>     uint32_t ds_limit;
>>>     uint32_t ss_limit;
>>>     uint16_t cs_ar;
>>>     uint16_t ds_ar;
>>>     uint16_t ss_ar;
>> You need selector entries for each segment as well.
> Really? What's the point in having the selector if we don't have a GDT,
> and we allow loading the cached part, which is the relevant one?

push %cs; push $1f; lret; 1:

At all points when segment details are updated, it is the responsibility
of software to ensure that the details match the GDT/LDT entry. See, for
example, the Intel and AMD manuals for syscall/sysenter, where similar
"updating segment details behind the scenes" occurs.
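
For concreteness, a sketch (field names are mine) of carrying the
visible selectors alongside the cached segment state:

    uint16_t cs_sel, ds_sel, ss_sel;          /* visible selectors     */
    uint32_t cs_base, ds_base, ss_base;       /* cached (hidden) state */
    uint32_t cs_limit, ds_limit, ss_limit;
    uint16_t cs_ar, ds_ar, ss_ar;             /* access rights         */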

>
>>> };
>>>
>>> struct vcpu_hvm_x86_64 {
>>>     uint64_t rax;
>>>     uint64_t rcx;
>>>     uint64_t rdx;
>>>     uint64_t rbx;
>>>     uint64_t rsp;
>>>     uint64_t rbp;
>>>     uint64_t rsi;
>>>     uint64_t rdi;
>>>     uint64_t r8;
>>>     uint64_t r9;
>>>     uint64_t r10;
>>>     uint64_t r11;
>>>     uint64_t r12;
>>>     uint64_t r13;
>>>     uint64_t r14;
>>>     uint64_t r15;
>>>     uint64_t rip;
>>>
>>>     uint64_t cr[8];
>>>     uint64_t efer;
>> What has happened to the segment information?  It cannot be omitted
>> completely, even in 64bit.
> I had in mind to set them to the values required for long mode:
>
> base = 0
> limit = 0xffff
>
> and the attributes field for CS to:
>
> ar = 0x29b (L bit set)
>
> And for SS/DS:
>
> ar = 0x093
>
> But maybe it makes sense to allow setting them in case someone wants to
> start in compatibility mode.

Agreed.
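
Spelled out (a sketch of the defaults described above, in illustrative
designated-initializer form):

    /* Long-mode flat-segment defaults. */
    .cs_base = 0, .cs_limit = 0xffff, .cs_ar = 0x29b, /* code, L bit set */
    .ds_base = 0, .ds_limit = 0xffff, .ds_ar = 0x093, /* read/write data */
    .ss_base = 0, .ss_limit = 0xffff, .ss_ar = 0x093, /* stack           */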

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

