xen-ia64-devel

Re: [Xen-ia64-devel][PATCH][VTD][dom0] dom0 vtd patch

On Thu, Oct 23, 2008 at 02:14:52PM +0800, Xu, Anthony wrote:
> Isaku Yamahata wrote:
> > On Wed, Oct 22, 2008 at 05:43:13PM +0800, Xu, Anthony wrote:
> >> dom0/ia64/vtd patch
> >>
> >> Signed-off-by: Anthony Xu <anthony.xu@xxxxxxxxx>
> >
> >
> > In the XEN_DOMCTL_get_device_group case:
> >
> >
> > struct xen_domctl_get_device_group {
> >     uint32_t  machine_bdf;      /* IN */
> >     uint32_t  max_sdevs;        /* IN */
> >     uint32_t  num_sdevs;        /* OUT */
> >     XEN_GUEST_HANDLE_64(uint32)  sdev_array;   /* OUT */
> > };
> >
> > XEN_DOMCTL_get_device_group is used by libxc.
> > sdev_array needs to be handled correctly.
> 
> I guess you are talking about xencomm.
> I don't know how to handle it.
> Can you enlighten me?
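For context: sdev_array is a guest handle that points at a buffer in the
caller's address space. On ia64 the hypervisor cannot dereference such a
virtual pointer directly, so privcmd has to wrap the buffer in an xencomm
descriptor before forwarding the domctl. A rough caller-side sketch (field
names taken from the struct quoted above; do_domctl() is a hypothetical
stand-in for the libxc ioctl path, and headers may differ per tree) shows
what reaches the kernel:

    #include <stdint.h>
    #include <string.h>
    #include <xen/domctl.h>     /* header path may differ per tree */

    #define MAX_SDEVS 16

    static int query_device_group(uint32_t domid, uint32_t machine_bdf)
    {
        struct xen_domctl domctl;
        uint32_t sdevs[MAX_SDEVS];

        memset(&domctl, 0, sizeof(domctl));
        domctl.cmd = XEN_DOMCTL_get_device_group;
        domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
        domctl.domain = (domid_t)domid;
        domctl.u.get_device_group.machine_bdf = machine_bdf;
        domctl.u.get_device_group.max_sdevs = MAX_SDEVS;
        /* sdevs is a plain virtual address in this process; the ia64
         * privcmd layer must translate it into an xencomm descriptor,
         * which is exactly what the patch below adds. */
        set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdevs);

        return do_domctl(&domctl);  /* hypothetical wrapper */
    }

On return, num_sdevs (an OUT field) tells the caller how many entries of
sdevs[] were filled in.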

How about this?


[IA64] xencomm: support several domctls for VT-d.

This patch adds xencomm support for several domctl hypercalls
used by VT-d.

Signed-off-by: Anthony Xu <anthony.xu@xxxxxxxxx>
Signed-off-by: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>

diff -r c87adc976013 arch/ia64/xen/xcom_privcmd.c
--- a/arch/ia64/xen/xcom_privcmd.c      Mon Oct 20 15:29:07 2008 +0100
+++ b/arch/ia64/xen/xcom_privcmd.c      Fri Oct 24 11:22:02 2008 +0900
@@ -340,6 +340,20 @@
                        return -ENOMEM;
                set_xen_guest_handle(kern_op.u.hvmcontext.buffer, (void*)desc);
                break;
+       case XEN_DOMCTL_get_device_group:
+       {
+               struct xen_domctl_get_device_group *get_device_group =
+                       &kern_op.u.get_device_group;
+               desc = xencomm_map(
+                       xen_guest_handle(get_device_group->sdev_array),
+                       get_device_group->max_sdevs * sizeof(uint32_t));
+               if (xen_guest_handle(get_device_group->sdev_array) != NULL &&
+                   get_device_group->max_sdevs > 0 && desc == NULL)
+                       return -ENOMEM;
+               set_xen_guest_handle(kern_op.u.get_device_group.sdev_array,
+                                    (void*)desc);
+               break;
+       }
        case XEN_DOMCTL_max_vcpus:
        case XEN_DOMCTL_scheduler_op:
        case XEN_DOMCTL_setdomainhandle:
@@ -354,6 +368,12 @@
        case XEN_DOMCTL_set_opt_feature:
        case XEN_DOMCTL_assign_device:
        case XEN_DOMCTL_subscribe:
+       case XEN_DOMCTL_test_assign_device:
+       case XEN_DOMCTL_deassign_device:
+       case XEN_DOMCTL_bind_pt_irq:
+       case XEN_DOMCTL_unbind_pt_irq:
+       case XEN_DOMCTL_memory_mapping:
+       case XEN_DOMCTL_ioport_mapping:
                break;
        case XEN_DOMCTL_pin_mem_cacheattr:
                return -ENOSYS;
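
A note on the second hunk: those commands can simply fall through because
all of their arguments live inline in struct xen_domctl itself, so there is
no embedded guest handle for privcmd to translate. For comparison with the
sketch above (again hypothetical, reusing the same do_domctl() stand-in):

    /* XEN_DOMCTL_assign_device carries only a BDF, no buffer pointer, so
     * the default xencomm path can forward the structure unchanged. */
    static int assign_device(uint32_t domid, uint32_t machine_bdf)
    {
        struct xen_domctl domctl;

        memset(&domctl, 0, sizeof(domctl));
        domctl.cmd = XEN_DOMCTL_assign_device;
        domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
        domctl.domain = (domid_t)domid;
        domctl.u.assign_device.machine_bdf = machine_bdf;

        return do_domctl(&domctl);
    }

Of the VT-d domctls handled here, only XEN_DOMCTL_get_device_group hands the
hypervisor an out-of-band array, which is why it alone needs the explicit
xencomm_map() above.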



-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel