
Re: [Xen-devel] [Patch 4/4] Refining Xsave/Xrestore support


  • To: Jan Beulich <JBeulich@xxxxxxxxxx>
  • From: Haitao Shan <maillists.shan@xxxxxxxxx>
  • Date: Thu, 28 Oct 2010 10:52:42 +0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Weidong Han <weidong.han@xxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Delivery-date: Wed, 27 Oct 2010 19:53:40 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi, Jan,

Thanks for reviewing. I am really not good at coding. :)
Please see my comments inline.

2010/10/27 Jan Beulich <JBeulich@xxxxxxxxxx>:
>>@@ -189,7 +189,8 @@ static int uncanonicalize_pagetable(
>> /* Load the p2m frame list, plus potential extended info chunk */
>> static xen_pfn_t *load_p2m_frame_list(
>>     xc_interface *xch, struct restore_ctx *ctx,
>>-    int io_fd, int *pae_extended_cr3, int *ext_vcpucontext)
>>+    int io_fd, int *pae_extended_cr3, int *ext_vcpucontext,
>>+    int *vcpuextstate, uint64_t *vcpuextstate_size)
>
> What value is it to have vcpuextstate_size (here any elsewhere in
> the patch) be a 64-bit quantity? In 32-bit tools exceeding 4G here
> wouldn't work anyway, and iirc the value really can't exceed 32 bits
> anyway.
Yes. I prefer 64-bit when I cannot guarantee the size stays below 4G.
The XSAVE area itself is at most 4G, since its size is reported in
ECX. :) However, I currently have two extra registers to save on top
of that (and maybe more XCRx registers in the future). So... Still, it
is unlikely to reach the 4G bound in real life.

>
>>@@ -781,6 +781,31 @@ struct xen_domctl_mem_sharing_op {
>> typedef struct xen_domctl_mem_sharing_op xen_domctl_mem_sharing_op_t;
>> DEFINE_XEN_GUEST_HANDLE(xen_domctl_mem_sharing_op_t);
>>
>>+/* XEN_DOMCTL_setvcpuextstate */
>>+/* XEN_DOMCTL_getvcpuextstate */
>>+struct xen_domctl_vcpuextstate {
>>+    /* IN: VCPU that this call applies to. */
>>+    uint32_t         vcpu;
>>+    /*
>>+     * SET: xfeature support mask of struct (IN)
>>+     * GET: xfeature support mask of struct (IN/OUT)
>>+     * xfeature mask is served as identifications of the saving format
>>+     * so that compatible CPUs can have a check on format to decide
>>+     * whether it can restore.
>>+     */
>>+    uint64_t         xfeature_mask;
>
> uint64_aligned_t.
>
>>+    /*
>>+     * SET: Size of struct (IN)
>>+     * GET: Size of struct (IN/OUT)
>>+     */
>>+    uint64_t         size;
>
> Here too.
I will add that in my updated patch.

>
>>+#if defined(__i386__) || defined(__x86_64__)
>
> Why? The structure makes no sense without the following field, so
> either the whole structure is x86-specific, or the field is generic as
> is the rest of the structure.
>
>>+    XEN_GUEST_HANDLE_64(uint64) buffer;
>>+#endif
>>+};
I prototyped my hypercall after another hypercall, which is also
x86-specific. Though it feels a bit ugly, I just followed the existing
coding style... Here is that structure in full:

/* XEN_DOMCTL_set_ext_vcpucontext */
/* XEN_DOMCTL_get_ext_vcpucontext */
struct xen_domctl_ext_vcpucontext {
    /* IN: VCPU that this call applies to. */
    uint32_t         vcpu;
    /*
     * SET: Size of struct (IN)
     * GET: Size of struct (OUT)
     */
    uint32_t         size;
#if defined(__i386__) || defined(__x86_64__)
    /* SYSCALL from 32-bit mode and SYSENTER callback information. */
    /* NB. SYSCALL from 64-bit mode is contained in vcpu_guest_context_t */
    uint64_aligned_t syscall32_callback_eip;
    uint64_aligned_t sysenter_callback_eip;
    uint16_t         syscall32_callback_cs;
    uint16_t         sysenter_callback_cs;
    uint8_t          syscall32_disables_events;
    uint8_t          sysenter_disables_events;
#endif
};


>
> Jan
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
