
Re: [Xen-devel] Multiple platform PCI device ID registries?



On Wed, 2013-11-13 at 11:24 +0000, Ian Campbell wrote:
> On Wed, 2013-11-13 at 11:01 +0000, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Ian Campbell
> > > Sent: 13 November 2013 09:41
> > > To: xen-devel
> > > Cc: Paul Durrant; Ian Jackson; Stefano Stabellini
> > > Subject: Multiple platform PCI device ID registries?
> > > 
> > > http://xenbits.xen.org/docs/unstable-staging/misc/pci-device-reservations.txt
> > > vs
> > > http://xenbits.xen.org/docs/unstable-staging/hypercall/x86_64/include,public,hvm,pvdrivers.h.html
> > > 
> > > Are they distinct namespaces? Can someone clarify with a patch to one or
> > > both what the relationship is? How does this relate to the additional
> > > platform device thing which Paul added to qemu?
> > > 
> > > I'm particularly concerned that 0x0002 is different in the two of
> > > them...
> > > 
> > 
> > They are distinct namespaces. The former is a PCI device ID; the
> > latter is an abstract 'product number' which is used as part of the
> > QEMU unplug protocol (and actually means nothing to the upstream QEMU
> > platform device code anyway).
> 
> I'm confused then.

And hence the following is as far as I got writing this down:

commit 8d34df2602ee99cc2efde160d3f297afdcaa80f7
Author: Ian Campbell <ian.campbell@xxxxxxxxxx>
Date:   Wed Nov 13 11:31:16 2013 +0000

    docs: clarify PV driver product numbers and PCI device ids
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

diff --git a/docs/misc/hvm-emulated-unplug.markdown b/docs/misc/hvm-emulated-unplug.markdown
index ec9ce83..d0e4af8 100644
--- a/docs/misc/hvm-emulated-unplug.markdown
+++ b/docs/misc/hvm-emulated-unplug.markdown
@@ -21,9 +21,8 @@ drivers):
 2. The drivers read a one-byte protocol version from IO port `0x12`.  If
    this is 0, skip to 6.
 
-3. The drivers write a two-byte product number to IO port `0x12`.  At
-   the moment, the only drivers using this protocol are our
-   closed-source ones, which use product number 1.
+3. The drivers write a two-byte product number to IO port `0x12`.  The
+   product number registry is http://xenbits.xen.org/docs/unstable-staging/hypercall/x86_64/include,public,hvm,pvdrivers.h.html#incontents_pvdrivers.
 
 4. The drivers write a four-byte build number to IO port `0x10`.
 
diff --git a/docs/misc/pci-device-reservations.txt b/docs/misc/pci-device-reservations.txt
index 19bd9d5..e98b848 100644
--- a/docs/misc/pci-device-reservations.txt
+++ b/docs/misc/pci-device-reservations.txt
@@ -29,3 +29,10 @@ Reservations
 0x0002        | Citrix XenServer (grandfathered allocation for XenServer 6.1)
 0xc000-0xc0ff | Citrix XenServer
 0xc100-0xc1ff | Citrix XenClient
+
+Product Vendor IDs
+==================
+
+Note that this namespace is distinct from the product number used in
+`hvm-emulated-unplug` and enumerated in
+http://xenbits.xen.org/docs/unstable-staging/hypercall/x86_64/include,public,hvm,pvdrivers.h.html#incontents_pvdrivers.
diff --git a/xen/include/public/hvm/pvdrivers.h b/xen/include/public/hvm/pvdrivers.h
index 4c6b705..94fca57 100644
--- a/xen/include/public/hvm/pvdrivers.h
+++ b/xen/include/public/hvm/pvdrivers.h
@@ -25,11 +25,15 @@
 #define _XEN_PUBLIC_PVDRIVERS_H_
 
 /*
+ * `incontents 300 pvdrivers PV Driver Product Numbers
+ *
  * This is the master registry of product numbers for
- * PV drivers. 
+ * PV drivers.
+ *
  * If you need a new product number allocating, please
- * post to xen-devel@xxxxxxxxxxxxxxxxxxxx  You should NOT use
+ * post to xen-devel@xxxxxxxxxxxxxxxxxxxxx  You should NOT use
  * a product number without allocating one.
+ *
  * If you maintain a separate versioning and distribution path
  * for PV drivers you should have a separate product number so
  * that your drivers can be separated from others.

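To make the two namespaces concrete, here is a small illustrative C fragment
(my sketch, not anything from the tree): the PCI vendor ID 0x5853 and the
platform device ID 0x0001 come from the full pci-device-reservations.txt,
while the product number below is a made-up placeholder, not an allocated
value from pvdrivers.h.

#include <stdint.h>
#include <stdio.h>

/* PCI ID namespace (docs/misc/pci-device-reservations.txt): identifies the
 * emulated platform PCI device itself.  Vendor 0x5853 is the Xen vendor ID;
 * 0x0001 is the classic platform device (0x0002 being the grandfathered
 * XenServer 6.1 allocation shown in the table above). */
#define XEN_PCI_VENDOR_ID       0x5853
#define XEN_PLATFORM_DEVICE_ID  0x0001

/* Product number namespace (xen/include/public/hvm/pvdrivers.h): identifies
 * the PV driver suite during the unplug handshake.  0x7fff is a made-up
 * placeholder here, NOT an allocated product number. */
#define EXAMPLE_PRODUCT_NUMBER  0x7fff

static int is_xen_platform_device(uint16_t vendor, uint16_t device)
{
    return vendor == XEN_PCI_VENDOR_ID && device == XEN_PLATFORM_DEVICE_ID;
}

int main(void)
{
    /* The two values answer different questions: "which PCI device is
     * this?" versus "which PV driver suite is talking to QEMU?". */
    printf("platform device: %d, product number: 0x%04x\n",
           is_xen_platform_device(0x5853, 0x0001), EXAMPLE_PRODUCT_NUMBER);
    return 0;
}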

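And, for reference, a minimal user-space sketch of the driver-side handshake
those markdown steps describe, assuming x86 port I/O via glibc's <sys/io.h>
(a real driver does this from kernel context).  The magic value 0x49d2 and
steps 1, 5 and 6 come from the parts of hvm-emulated-unplug.markdown not
quoted in the hunk above; the product and build numbers passed in are
caller-supplied placeholders.

#include <stdint.h>
#include <sys/io.h>   /* inb/inw/outw/outl/ioperm; x86 only, needs root */

#define XEN_UNPLUG_PORT_MAGIC    0x10   /* magic / build number / unplug mask */
#define XEN_UNPLUG_PORT_VERSION  0x12   /* protocol version / product number */
#define XEN_UNPLUG_MAGIC         0x49d2

static int xen_unplug_emulated_devices(uint16_t product, uint32_t build,
                                       uint16_t mask)
{
    /* 1. Check the magic number; bail out if the protocol is not offered. */
    if (inw(XEN_UNPLUG_PORT_MAGIC) != XEN_UNPLUG_MAGIC)
        return -1;

    /* 2. A protocol version of 0 means skip straight to the unplug write. */
    if (inb(XEN_UNPLUG_PORT_VERSION) != 0) {
        /* 3. Write the two-byte product number from the pvdrivers.h registry. */
        outw(product, XEN_UNPLUG_PORT_VERSION);
        /* 4. Write the four-byte build number. */
        outl(build, XEN_UNPLUG_PORT_MAGIC);
        /* 5. If the magic number changed, this product/build is blacklisted. */
        if (inw(XEN_UNPLUG_PORT_MAGIC) != XEN_UNPLUG_MAGIC)
            return -1;
    }

    /* 6. Ask QEMU to unplug the selected classes of emulated devices
     *    (bit meanings are defined in hvm-emulated-unplug.markdown). */
    outw(mask, XEN_UNPLUG_PORT_MAGIC);
    return 0;
}

int main(void)
{
    if (ioperm(XEN_UNPLUG_PORT_MAGIC, 4, 1) != 0)
        return 1;                 /* raw port I/O needs root */
    /* Placeholder product/build; mask 0 asks for nothing to be unplugged. */
    return xen_unplug_emulated_devices(0x7fff, 0, 0) ? 2 : 0;
}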


