
[Xen-devel] Re: [PATCH 4/8] HVM save restore: vcpu context support



On Thu, Jan 11, 2007 at 11:38:34AM -0600, Anthony Liguori wrote:
> Zhai, Edwin wrote:
> >[PATCH 4/8] HVM save restore: vcpu context support
> >
> >Signed-off-by: Zhai Edwin <edwin.zhai@xxxxxxxxx>
> >
> > typedef uint64_t tsc_timestamp_t; /* RDTSC timestamp */
> >+
> >+/*
> >+ * World vmcs state
> >+ */
> >+struct vmcs_data {
> >+    uint64_t  eip;        /* execution pointer */
> >+    uint64_t  esp;        /* stack pointer */
> >+    uint64_t  eflags;     /* flags register */
> >+    uint64_t  cr0;
> >+    uint64_t  cr3;        /* page table directory */
> >+    uint64_t  cr4;
> >+    uint32_t  idtr_limit; /* idt */
> >+    uint64_t  idtr_base;
> 
> If I read the code correctly, vmcs_data ends up becoming part of:
> 
> +
> +#define HVM_CTXT_SIZE        6144
> +typedef struct hvm_domain_context {
> +    uint32_t cur;
> +    uint32_t size;
> +    uint8_t data[HVM_CTXT_SIZE];
> +} hvm_domain_context_t;
> +DEFINE_XEN_GUEST_HANDLE(hvm_domain_context_t);

vmcs_data ends up as part of vcpu_guest_context. hvm_domain_context is a
long, flat buffer used for saving device state inside the hypervisor.
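
Roughly, the containment looks like this (the member name below is only
illustrative, not necessarily what the patch uses):

struct vcpu_guest_context {
    /* ... existing per-vcpu register state ... */
    struct vmcs_data vmcs;   /* per-vcpu VMX world state carried with the
                                vcpu context across save/restore */
};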

> 
> Which then gets saved to disk.  My first concern would be that struct 
> vmcs_data is not padding safe.  How idtr_limit gets padding may change 
> in future versions of GCC which would break the save format.
> 
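One way to make that layout independent of the compiler (a sketch only, not
what the current patch does) would be to spell out the hole the compiler
inserts after idtr_limit, so every field sits at a fixed, explicit offset:

struct vmcs_data {
    uint64_t  eip;        /* execution pointer */
    uint64_t  esp;        /* stack pointer */
    uint64_t  eflags;     /* flags register */
    uint64_t  cr0;
    uint64_t  cr3;        /* page table directory */
    uint64_t  cr4;
    uint32_t  idtr_limit; /* idt */
    uint32_t  _pad0;      /* explicit padding: keeps idtr_base 8-byte aligned
                             without depending on the compiler's layout */
    uint64_t  idtr_base;
    /* ... */
};
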
> The second is how HVM_CTXT_SIZE gets defined.  Not sure there's a great 
> way to address though (although the first issue is definitely fixable).

I just define a buffer that is big enough for the whole HVM context and handle
overflow when filling it.

The true length of the data grows dynamically as more VCPUs come up, so it
seems hard to let the control panel know the exact size in advance.
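
For the overflow handling, what I have in mind is a bounds-checked append into
that flat buffer, along these lines (the helper name is made up; this is just
a sketch of the idea, not the patch itself):

#include <stdint.h>
#include <string.h>

#define HVM_CTXT_SIZE 6144

typedef struct hvm_domain_context {
    uint32_t cur;                   /* current write offset */
    uint32_t size;                  /* total length of valid data */
    uint8_t  data[HVM_CTXT_SIZE];
} hvm_domain_context_t;

/* Append 'len' bytes of state; fail cleanly instead of running past the
 * fixed-size buffer. */
static int hvm_ctxt_put(hvm_domain_context_t *h, const void *src, uint32_t len)
{
    if ( len > HVM_CTXT_SIZE - h->cur )
        return -1;                  /* overflow: caller aborts the save */
    memcpy(&h->data[h->cur], src, len);
    h->cur += len;
    if ( h->cur > h->size )
        h->size = h->cur;
    return 0;
}
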
> 
> Regards,
> 
> Anthony Liguori
> 

-- 
best rgds,
edwin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

