
Re: [Xen-devel] [v11][PATCH 01/16] xen: introduce XENMEM_reserved_device_memory_map



>>> On 22.07.15 at 03:29, <tiejun.chen@xxxxxxxxx> wrote:
> From: Jan Beulich <jbeulich@xxxxxxxx>
> 
> This is a prerequisite for punching holes into HVM and PVH guests' P2M
> to allow passing through devices that are associated with (on VT-d)
> RMRRs.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> Signed-off-by: Tiejun Chen <tiejun.chen@xxxxxxxxx>

Since you didn't state what you did to the patch, I had always assumed
you did no more than some cosmetic adjustments, if any. Now that I'm
preparing to get the initial part of this series in, I find that you
altered it quite a bit, so I'm afraid I'm going to need to undo some of
your adjustments.

> @@ -303,6 +342,33 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>              break;
>          }
>  
> +#ifdef HAS_PASSTHROUGH
> +        case XENMEM_reserved_device_memory_map:
> +        {
> +            struct get_reserved_device_memory grdm;
> +
> +            if ( copy_from_guest(&grdm.map, compat, 1) ||
> +                 !compat_handle_okay(grdm.map.buffer, grdm.map.nr_entries) )
> +                return -EFAULT;
> +
> +            grdm.used_entries = 0;
> +            rc = iommu_get_reserved_device_memory(get_reserved_device_memory,
> +                                                  &grdm);
> +
> +            if ( !rc && grdm.map.nr_entries < grdm.used_entries )
> +                rc = -ENOBUFS;
> +
> +            grdm.map.nr_entries = grdm.used_entries;
> +            if ( grdm.map.nr_entries )

This conditional appears to be a bug: How would the caller know,
upon successful return, that there are no reserved regions?
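To illustrate (just a sketch against the quoted hunk, not necessarily
what the updated patch will end up doing): writing nr_entries back
unconditionally lets the caller distinguish "zero reserved regions"
from "output not filled in", and also lets a caller probing with
nr_entries == 0 learn the required buffer size alongside -ENOBUFS:

            grdm.map.nr_entries = grdm.used_entries;
            /* Copy the (possibly zero) count back even when there are
             * no entries, so a successful return is unambiguous. */
            if ( __copy_to_guest(compat, &grdm.map, 1) )
                rc = -EFAULT;

            return rc;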

> @@ -1162,6 +1199,33 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          break;
>      }
>  
> +#ifdef HAS_PASSTHROUGH
> +    case XENMEM_reserved_device_memory_map:
> +    {
> +        struct get_reserved_device_memory grdm;
> +

+        if ( unlikely(start_extent) )
+            return -ENOSYS;
+

> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -573,7 +573,42 @@ struct xen_vnuma_topology_info {
>  typedef struct xen_vnuma_topology_info xen_vnuma_topology_info_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_vnuma_topology_info_t);
>  
> -/* Next available subop number is 27 */
> +/*
> + * With some legacy devices, certain guest-physical addresses cannot safely
> + * be used for other purposes, e.g. to map guest RAM.  This hypercall
> + * enumerates those regions so the toolstack can avoid using them.
> + */
> +#define XENMEM_reserved_device_memory_map   27
> +struct xen_reserved_device_memory {
> +    xen_pfn_t start_pfn;
> +    xen_ulong_t nr_pages;
> +};
> +typedef struct xen_reserved_device_memory xen_reserved_device_memory_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_reserved_device_memory_t);
> +
> +struct xen_reserved_device_memory_map {
> +    /* IN */
> +    /* Currently just one bit to indicate checking all Reserved Device Memory. */
> +#define PCI_DEV_RDM_ALL   0x1
> +    uint32_t        flag;
> +    /* IN */
> +    uint16_t        seg;
> +    uint8_t         bus;
> +    uint8_t         devfn;

This makes a mem-op PCI specific. For one, this should therefore be
put in a union, so that non-PCI uses remain possible in the future
without breaking by then existing users of the interface. And with
that I wonder whether this shouldn't use struct physdev_pci_device.
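Just to illustrate the direction (field layout and names here are only
a sketch, not what the updated patch will necessarily use), something
along these lines would keep the subop extensible beyond PCI:

struct xen_reserved_device_memory_map {
#define XENMEM_RDM_ALL 1 /* Request all regions (ignore dev union). */
    /* IN */
    uint32_t flags;
    /* IN/OUT: entries provided in / written to (or needed for) buffer. */
    unsigned int nr_entries;
    /* OUT */
    XEN_GUEST_HANDLE(xen_reserved_device_memory_t) buffer;
    /* IN: device selector, used only when XENMEM_RDM_ALL is not set. */
    union {
        struct physdev_pci_device pci;
        /* Room for non-PCI device identification in the future. */
    } dev;
};

A caller could then probe with nr_entries set to zero, learn the needed
count from the value written back (together with the -ENOBUFS error),
allocate a suitably sized buffer, and repeat the call.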

> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -33,6 +33,8 @@
>  #define PCI_DEVFN2(bdf) ((bdf) & 0xff)
>  #define PCI_BDF(b,d,f)  ((((b) & 0xff) << 8) | PCI_DEVFN(d,f))
>  #define PCI_BDF2(b,df)  ((((b) & 0xff) << 8) | ((df) & 0xff))
> +#define PCI_SBDF(s,bdf) (((s & 0xffff) << 16) | (bdf & 0xffff))
> +#define PCI_SBDF2(s,b,df) (((s & 0xffff) << 16) | PCI_BDF2(b,df))

The natural thing for PCI_SBDF() would be

#define PCI_SBDF(s,b,d,f) ...

See for instance
http://lists.xenproject.org/archives/html/xen-devel/2015-07/msg02554.html
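I.e. roughly the following (the suffix numbering is purely
illustrative, and the macro arguments get parenthesized, which the
quoted additions also lack):

#define PCI_SBDF(s,b,d,f) ((((s) & 0xffff) << 16) | PCI_BDF(b,d,f))
#define PCI_SBDF2(s,bdf)  ((((s) & 0xffff) << 16) | ((bdf) & 0xffff))
#define PCI_SBDF3(s,b,df) ((((s) & 0xffff) << 16) | PCI_BDF2(b,df))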

I'm going to produce an updated patch, to be sent out later today.

Jan
